source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (stringlengths 9-11) | ic (unknown) | target (sequence) |
---|---|---|---|---|---|
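Each row below follows this schema: source is a list of sentences drawn from a paper, source_labels and rouge_scores hold one entry per source sentence, paper_id is the paper's identifier, ic is a flag, and target holds a one-sentence summary. As a usage aid, here is a minimal sketch of how such rows could be read and sanity-checked; it is not part of the dataset. The file name rows.jsonl, the JSON Lines layout, the iter_rows helper, and the use of ROUGE-1 F1 from the rouge-score package are all assumptions made for illustration, since the preview does not state how the rows are distributed or which ROUGE variant produced the scores shown.

```python
# Minimal sketch, assuming the rows are stored locally as JSON Lines
# ("rows.jsonl", one object per row) with the six fields listed in the header.
# The ROUGE variant behind the rouge_scores column is not stated in this
# preview; ROUGE-1 F1 from the rouge-score package is used here as a stand-in.
import json

from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)


def iter_rows(path="rows.jsonl"):
    """Yield one dict per row: source, source_labels, rouge_scores, paper_id, ic, target."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)


for row in iter_rows():
    target = row["target"][0]
    # In the rows shown here, the single 1 in source_labels coincides with the
    # source sentence whose listed ROUGE score against the target is highest.
    best = max(range(len(row["source"])), key=lambda i: row["rouge_scores"][i])
    labelled = row["source_labels"].index(1) if 1 in row["source_labels"] else None
    recomputed = scorer.score(target, row["source"][best])["rouge1"].fmeasure
    print(row["paper_id"], row["ic"], labelled == best, round(recomputed, 3))
```

Treat the recomputed score only as a rough sanity check: whether the listed rouge_scores were produced with these exact settings cannot be verified from the preview.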
[
"We present a data driven approach to construct a library of feedback motion primitives for non-holonomic vehicles that guarantees bounded error in following arbitrarily long trajectories.",
"This ensures that motion re-planning can be avoided as long as disturbances to the vehicle remain within a certain bound and also potentially when the obstacles are displaced within a certain bound.",
"The library is constructed along local abstractions of the dynamics that enables addition of new motion primitives through abstraction refinement.",
"We provide sufficient conditions for construction of such robust motion primitives for a large class of nonlinear dynamics, including commonly used models, such as the standard Reeds-Shepp model.",
"The algorithm is applied for motion planning and control of a rover with slipping without its prior modelling.",
"Various state-the-art motion planning approaches for carlike vehicles use the bicycle model to generate feasible trajectories for high level planning BID3 .",
"The model is either discretized in lattice based methods or used as a heuristic for measuring distance between two states in sampling based methods such as rapidly exploring random trees (RRT) BID1 .",
"It is then up to the low level feedback controllers of the vehicle to follow the prescribed trajectory; an overview of this group of approaches can be found in Paden et al. BID2 .",
"This might prove a challenge in cases where the bicycle model does not resemble the actual vehicle dynamics closely enough; this may result in growing error between the prescribed trajectory and vehicles position which in turn may require trajectory re-planning BID3 .",
"Recently, approaches such as Howard et al. BID0 and Schwarting et al. BID4 have been proposed that can incorporate the vehicle dynamics in planning to ensure collision avoidance by using model predictive control.",
"While model predictive control can provide feasible trajectories for a large class of nonlinear models, it becomes prohibitively complex for long prediction horizons and may fall into local optima for short prediction horizons in non-convex problem settings BID5 .In",
"this work we follow the input discretization approach similar to lattice based methods for motion planning. Instead",
"of relying on a model, we sample from the input space similar to Howard et al. BID0 . The main",
"contribution in this work is that we construct locally linear abstractions of the system around samples in the input space and design local feedback rules to ensure fixed upper bound on state error after applying any motion primitive considering both linearization error and initial state error. Therefore",
", we can guarantee bounded state error through application of the motion primitives at all times. The idea",
"of feedback based motion primitives has also been presented in Vukosavljev et al. BID6 for multi-agent drones with omni-directional controllability; the main contrast here is that we provide a tool for construction of such motion primitives for non-holonomic vehicles. We pose",
"an assumption we refer to as robustifiability in order to be able to synthesize such motion primitives."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23529411852359772,
0.19230768084526062,
0.2222222238779068,
0.11764705181121826,
0.13636362552642822,
0.08888888359069824,
0.03703703358769417,
0.1111111044883728,
0.16393442451953888,
0.17543859779834747,
0.06557376682758331,
0.09302324801683426,
0.08888888359069824,
0.20895521342754364,
0.13636362552642822,
0.16129031777381897,
0.19512194395065308
] | BJgsNFWsaE | true | [
"We show that under some assumptions on vehicle dynamics and environment uncertainty it is possible to automatically synthesize motion primitives that do not accumulate error over time."
] |
[
"Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.",
"These UAPs exhibit interesting visual patterns, but this phenomena is, as yet, poorly understood.",
"Our work shows that visually similar procedural noise patterns also act as UAPs.",
"In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns.",
"This behaviour, its causes, and implications deserve further in-depth study.",
"Deep Convolutional Networks (DCNs) have enabled deep learning to become one the primary tools for computer vision tasks.",
"However, adversarial examples-slightly altered inputs that change the model's output-have raised concerns on their reliability and security.",
"Adversarial perturbations can be defined as the noise patterns added to natural inputs to generate adversarial examples.",
"Some of these perturbations are universal, i.e. the same pattern can be used to fool the classifier on a large fraction of the tested dataset (MoosaviDezfooli et al., 2017; BID4 .",
"As shown in FIG1 , it is interesting to observe that such Universal Adversarial Perturbations (UAPs) for DCNs contain structure in their noise patterns.Results from BID1 together with our results here suggest that DCNs are sensitive to procedural noise perturbations, and more specifically here to Gabor noise.",
"Existing UAPs have some visual similarities with Gabor noise as in FIG2 .",
"Convolutional layers induce a prior on DCNs to learn local spatial information BID2 , and DCNs trained on natural image datasets, such as ImageNet, learn convolution filters that are similar UAPs generated for VGG-19 targeting specific layers using singular vector method BID4 .",
"BID10 and decreasing frequency from left to right.",
"in appearance to Gabor kernels and colour blobs BID15 BID11 .",
"Gabor noise is a convolution between a Gabor kernel 2 and a sparse white noise.",
"Thus, we hypothesize that DCNs are sensitive to Gabor noise, as it exploits specific features learned by the convolutional filters.In this paper we demonstrate the sensitivity of 3 different DCN architectures (Inception v3, , to Gabor noise on the ImageNet image classification task.",
"We empirically observed that even random Gabor noise patterns can be effective to generate UAPs.",
"Understanding this behaviour is important, as the generation and injection of Gabor noise is computationally inexpensive and, therefore, can become a threat to the security and reliability of DCNs.",
"The results show that the tested DCN models are sensitive to Gabor noise for a large fraction of the inputs, even when the parameters of the Gabor noise are chosen at random.",
"This hints that it may be representative of patterns learned at the earlier layers as Gabor noise appears visually similar to some UAPs targeting earlier layers in DCNs BID4 .This",
"phenomenon has important implications on the security and reliability of DCNs, as it can allow attackers to craft inexpensive black-box attacks. On the",
"defender's side, Gabor noise patterns can also be used to efficiently generate data for adversarial training to improve DCNs robustness. However",
", both the sensitivity exploited and the potential to mitigate it require a more in-depth understanding of the phenomena at play. In future",
"work, it may be worth analyzing the sensitivity of hidden layer activations across different families of procedural noise patterns and to investigate techniques to reduce the sensitivity of DCNs to perturbations."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2448979616165161,
0.052631575614213943,
0.05405404791235924,
0.2631579041481018,
0,
0.2857142686843872,
0.04878048226237297,
0.14999999105930328,
0.22641508281230927,
0.1875,
0.2222222238779068,
0.12903225421905518,
0.0624999962747097,
0.1764705777168274,
0.11428570747375488,
0.2539682388305664,
0.1538461446762085,
0.16326530277729034,
0.2857142686843872,
0.19607841968536377,
0.08695651590824127,
0.13636362552642822,
0.08888888359069824,
0.12244897335767746
] | HJx08NSnnE | true | [
"Existing Deep Convolutional Networks in image classification tasks are sensitive to Gabor noise patterns, i.e. small structured changes to the input cause large changes to the output."
] |
[
"Deep neural networks (DNNs) perform well on a variety of tasks despite the fact that most used in practice are vastly overparametrized and even capable of perfectly fitting randomly labeled data.",
"Recent evidence suggests that developing \"compressible\" representations is key for adjusting the complexity of overparametrized networks to the task at hand and avoiding overfitting (Arora et al., 2018; Zhou et al., 2018).",
"In this paper, we provide new empirical evidence that supports this hypothesis, identifying two independent mechanisms that emerge when the network’s width is increased: robustness (having units that can be removed without affecting accuracy) and redundancy (having units with similar activity).",
"In a series of experiments with AlexNet, ResNet and Inception networks in the CIFAR-10 and ImageNet datasets, and also using shallow networks with synthetic data, we show that DNNs consistently increase either their robustness, their redundancy, or both at greater widths for a comprehensive set of hyperparameters.",
"These results suggest that networks in the deep learning regime adjust their effective capacity by developing either robustness or redundancy.",
"Deep neural networks (DNNs) are capable of successfully learning from examples in a wide variety of tasks.",
"Though these networks are typically trained with large amounts of data, the number of free parameters in their architectures is often several orders of magnitude greater than the number of training examples.",
"This overparametrization reflects the ability of DNNs to memorize entire datasets, even with randomized labels .",
"Additionally, large networks not only tend to match the performance of small ones, but often generalize better (e.g. Neyshabur et al. (2017b) ; Frankle & Carbin (2018) ; Neyshabur et al. (2018) ; Novak et al. (2018) ).",
"Figure 1 demonstrates this for a variety of modern networks trained in ImageNet and CIFAR-10.",
"These observations raise the question of how vastly overparametrized networks can perform well in structured tasks without overfitting.",
"While DNNs appear to adapt their capacity to the complexity of the given task, precisely what causes them to do so remains an open question .",
"Several previous studies have aimed to uncover why, out of the many optima an overparametrized network can reach to achieve 100% training accuracy, they tend toward ones that generalize well (Neyshabur et al., 2017b; Zhang et al., 2017; Neyshabur et al., 2018; Novak et al., 2018) often by proving generalization bounds for simple models related to weight matrix norms or Rademacher complexity (Bartlett et al., 2017; Neyshabur et al., 2017a; Arora et al., 2018; Neyshabur et al., 2018) .",
"Frankle & Carbin (2018) , showed that, in certain networks, the crucial computations were performed by sparse subnetworks within them.",
"In doing so, they suggested that large networks tend to perform as well as or better than smaller ones because they more reliably contained fortuitously-initialized \"lottery ticket\" subnetworks.",
"Here, we focus on the question of why generalization ability does not decrease as a network's degree of overparametrization increases.",
"We investigate two critical properties of DNNs: robustness (how fragile the network is to removal of units) and redundancy (how similar unit activity is).",
"In doing so, we build off of theoretical work by Arora et al. (2018) and Zhou et al. (2018) , connecting the compressibility of DNNs to their non-overfitting behavior.",
"We find that various DNNs train toward regimes with different degrees of robustness and redundancy, but that at least one of the two properties, if not both, consistently emerges as a model's size is increased.",
"Based on these results, we offer interpretations of the various ways in which DNNs may constrain their effective capacity to protect from overfitting.",
"In this work, we empirically analyze models in terms of their activations (Novak et al., 2018; Morcos et al., 2018a; b) which makes our results contextual to input data.",
"Because of this, we are able to scale our analysis to state of the art networks like ResNet18 and Inception-v3.",
"And by focusing not on the broad question of generalization, but on the subproblem of why networks do not perform worse when their size is increased, we are able to show that redundancy and robustness are central to how networks autoregularize.",
"A related branch of work has focused on the relationship between a network's compressibility and its generalization behavior (Zhou et al., 2018) Our results generally validate both of these approaches, but we show that different networks develop different compressible features and to different extents, so we speculate that both pruning unimportant units and compressing redundant units may be complementary tools for developing new compression algorithms.",
"We also show that redundancy is highly sensitive to a network's initialization while its accuracy is not.",
"This suggests that certain compression techniques could be improved greatly by validating over multiple initializations in order to produce maximally redundant models.",
"We also make progress toward tightening our understanding of how compressible DNNs are which Zhou et al. (2018) shows can lead to improved practical generalization bounds.",
"Arora et al. (2014) suggests that redundancy implies robustness, and Morcos et al. (2018b) connects a network's robustness to the flattening of a layers' activation space along the direction of a single activation vector to improved generalization performance.",
"However, our findings suggest that these trends may not hold for all networks and that redundancy and robustness poorly predict generalization.",
"Our work is also related to Maennel et al. (2018) who takes a theoretical approach to show that model networks in the overparametrized regime tend to develop weight vectors that align to a set of discrete directions that are determined by the input data.",
"Our work suggest that their conclusions may retain a high degree of explanatory power in some but not all state of the art cases.",
"Despite a great deal of recent progress, to our knowledge, ours is the first work to date that has quantitatively studied the connections between overparametrization, robustness, and redundancy together.",
"We analyze these phenomena across a wide range of networks which may aid in understanding how well theoretical findings (which are typically based on simple models) generalize to common networks in machine learning.",
"We find that each network we analyze displays unique trends in robustness, compressibility, and similarity, yet that all deep ones develop more redundancy and/or robustness at large model sizes.",
"We also demonstrate that the two are highly dependent on initializations and that high variance increases redundancy in some networks and decrease it in others.",
"Limitations of our work include that we do not analyze cases with varying network depth and the fact that our single-layer MLPs with large initializations trained with high-dimensional, uncorrelated data do not seem to develop either increased robustness or redundancy at large model sizes.",
"However, a recent strand of research has emerged illluminating similarities between deep networks and kernel machines (Belkin et al., 2018; Jacot et al., 2018; Liang & Rakhlin, 2018) and suggesting that networks with high-variance initializations can operate in a kernel-like regime (Chizat et al., 2019; Woodworth et al., 2019) which we suspect relates to these findings for networks initialized with large variance.",
"In this paper, we jointly analyze the robustness and redundancy of deep neural networks with the aim of understanding why generalization ability does not tend to decrease as a network's degree of overparametrization increases.",
"In doing so, we find that robustness and redundancy do not imply each other but that one or the other or both consistently increase alongside overparametrization.",
"We connect these observations to various capacity-constraining features which DNNs may develop in order to support the connection between compressibility and generalization and to shed light on the features networks may develop to avoid overfitting.",
"In doing so, we paint a more complex picture of robustness and redundancy than much previous work has assumed.",
"By illustrating the relationships between these phenomena, we suggest various new research directions in theory of learning and compression.",
"We believe that together, these findings represent a milestone in understanding the emergent properties of overparametrized neural networks.",
"ResNet18s: These networks were off the shelf from He et al. (2016) for the ImageNet dataset.",
"They consisted of an initial convolution and batch norm followed by 4 building block (v1) layers, each with 2 blocks and a fully connected layer leading to a softmax output.",
"All kernel sizes in the initial layers and block layers were of size 7 × 7 and stride 2.",
"All activations were ReLU.",
"In the 1x sized model, the convolutions in the initial and block layers used 64, 64, 128, and 256 filters respectively.",
"After Xavier/Glorot initialization, we trained them for 90 epochs with a default batch size of 256 an initial default learning rate of 1 which decayed by a factor of 10 at epochs 30, 60, and 80.",
"Training was done on the ILSVRC 2012 dataset with approximately 1 million images, and evaluation was done on 50,000 validation images.",
"Optimization was done with SGD using 0.9 momentum.",
"We used batch normalization, data augmentation with random cropping and flipping, and 0.0001 weight decay."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17391303181648254,
0.1304347813129425,
0.11538460850715637,
0.1071428507566452,
0.277777761220932,
0.1875,
0.09302324801683426,
0.06451612710952759,
0.08695651590824127,
0.19354838132858276,
0.11764705181121826,
0.052631575614213943,
0.02702702395617962,
0.0555555522441864,
0.0952380895614624,
0,
0.21052631735801697,
0.1463414579629898,
0.08163265138864517,
0.1538461446762085,
0.13636362552642822,
0.1764705777168274,
0.20000000298023224,
0.10958904027938843,
0.1249999925494194,
0.10526315122842789,
0.0952380895614624,
0.17391303181648254,
0.22857142984867096,
0.11320754140615463,
0.05128204822540283,
0.1395348757505417,
0.1702127605676651,
0.22727271914482117,
0.21052631735801697,
0.15094339847564697,
0.17910447716712952,
0.2978723347187042,
0.1538461446762085,
0.3255814015865326,
0.17142856121063232,
0.11428570747375488,
0.1764705777168274,
0.06451612710952759,
0.09090908616781235,
0.1249999925494194,
0,
0.12121211737394333,
0.08510638028383255,
0.05882352590560913,
0,
0.06451612710952759
] | S1xRbxHYDr | true | [
"Probing robustness and redundancy in deep neural networks reveals capacity-constraining features which help to explain non-overfitting."
] |
[
"Deep networks realize complex mappings that are often understood by their locally linear behavior at or around points of interest.",
"For example, we use the derivative of the mapping with respect to its inputs for sensitivity analysis, or to explain (obtain coordinate relevance for) a prediction.",
"One key challenge is that such derivatives are themselves inherently unstable.",
"In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.",
"While the problem is challenging in general, we focus on networks with piecewise linear activation functions.",
"Our algorithm consists of an inference step that identifies a region around a point where linear approximation is provably stable, and an optimization step to expand such regions.",
"We propose a novel relaxation to scale the algorithm to realistic models.",
"We illustrate our method with residual and recurrent networks on image and sequence datasets.",
"Complex mappings are often characterized by their derivatives at points of interest.",
"Such derivatives with respect to the inputs play key roles across many learning problems, including sensitivity analysis.",
"The associated local linearization is frequently used to obtain explanations for model predictions BID3 BID24 BID28 BID26 ; explicit first-order local approximations BID22 BID17 BID31 Koh & Liang, 2017; BID1 ; or used to guide learning through regularization of functional classes controlled by derivatives BID19 BID5 Mroueh et al., 2018) .",
"We emphasize that the derivatives discussed in this paper are with respect to the input coordinates rather than parameters.The key challenge lies in the fact that derivatives of functions parameterized by deep learning models are not stable in general BID14 .",
"State-of-the-art deep learning models (He et al., 2016; Huang et al., 2017) are typically over-parametrized BID37 , leading to unstable functions as a by-product.",
"The instability is reflected in both the function values BID17 as well as the derivatives BID14 BID0 .",
"Due to unstable derivatives, first-order approximations used for explanations therefore also lack robustness BID14 BID0 .We",
"note that gradient stability is a notion different from adversarial examples. A",
"stable gradient can be large or small, so long as it remains approximately invariant within a local region. Adversarial",
"examples, on the other hand, are small perturbations of the input that change the predicted output BID17 . A large local",
"gradient, whether stable or not in our sense, is likely to contribute to finding an adversarial example. Robust estimation",
"techniques used to protect against adversarial examples (e.g., (Madry et al., 2018) ) focus on stable function values rather than stable gradients but can nevertheless indirectly impact (potentially help) gradient stability. A direct extension",
"of robust estimation to ensure gradient stability would involve finding maximally distorted derivatives and require access to approximate Hessians of deep networks.In this paper, we focus on deep networks with piecewise linear activations to make the problem tractable. The special structure",
"of this class of networks (functional characteristics) allows us to infer lower bounds on the p margin -the maximum radius of p -norm balls around a point where derivatives are provably stable. In particular, we investigate",
"the special case of p = 2 since the lower bound has an analytical solution, and permits us to formulate a regularization problem to maximize it. The resulting objective is, however",
", rigid and non-smooth, and we further relax the learning problem in a manner resembling (locally) support vector machines (SVM) BID29 BID8 .Both the inference and learning problems",
"in our setting require evaluating the gradient of each neuron with respect to the inputs, which poses a significant computational challenge. For piecewise linear networks, given D-dimensional",
"data, we propose a novel perturbation algorithm that collects all the exact gradients by means of forward propagating O(D) carefully crafted samples in parallel without any back-propagation. When the GPU memory cannot fit O(D) samples in one",
"batch, we develop an unbiased approximation to the objective with a random subset of such samples.Empirically, we examine our inference and learning algorithms with fully-connected (FC), residual (ResNet) (He et al., 2016) , and recurrent (RNN) networks on image and time-series datasets with quantitative and qualitative experiments. The main contributions of this work are as follows:•",
"Inference algorithms that identify input regions of neural networks, with piecewise linear activation functions, that are provably stable.• A novel learning criterion that effectively expand",
"regions of provably stable derivatives.• Novel perturbation algorithms that scale computation",
"to high dimensional data.• Empirical evaluation with several types of networks.",
"This paper introduces a new learning problem to endow deep learning models with robust local linearity.",
"The central attempt is to construct locally transparent neural networks, where the derivatives faithfully approximate the underlying function and lends itself to be stable tools for further applications.",
"We focus on piecewise linear networks and solve the problem based on a margin principle similar to SVM.",
"Empirically, the proposed ROLL loss expands regions with provably stable derivatives, and further generalize the stable gradient property across linear regions.",
"DISPLAYFORM0 , and the feasible set of the activation pattern is equivalent to DISPLAYFORM1 Ifx is feasible to the fixed activation pattern o 1 j , it is equivalent to thatx satisfies the linear constraint DISPLAYFORM2 in the first layer.Assumex has satisfied all the constraints before layer i > 1.",
"We know if all the previous layers follows the fixed activation indicators, it is equivalent to rewrite each DISPLAYFORM3 Then for j ∈ [N i ], it is clear that z DISPLAYFORM4 The proof follows by induction."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11428570747375488,
0.20512820780277252,
0.07692307233810425,
0.23529411852359772,
0.12903225421905518,
0.14999999105930328,
0.23076923191547394,
0.0714285671710968,
0.14814814925193787,
0.25,
0.09677419066429138,
0.20408162474632263,
0.10810810327529907,
0.13333332538604736,
0.06451612710952759,
0.07407406717538834,
0,
0.1818181723356247,
0.060606054961681366,
0.07999999821186066,
0.26923075318336487,
0.2083333283662796,
0.13636362552642822,
0.05128204822540283,
0.1463414579629898,
0.1249999925494194,
0.1230769231915474,
0.10256409645080566,
0.14814814925193787,
0.2222222238779068,
0.19999998807907104,
0.1463414579629898,
0.1875,
0.060606054961681366,
0.11999999731779099,
0.0833333283662796
] | SylCrnCcFX | true | [
"A scalable algorithm to establish robust derivatives of deep networks w.r.t. the inputs."
] |
[
"Recent years have witnessed two seemingly opposite developments of deep convolutional neural networks (CNNs).",
"On one hand, increasing the density of CNNs by adding cross-layer connections achieve higher accuracy.",
"On the other hand, creating sparsity structures through regularization and pruning methods enjoys lower computational costs.",
"In this paper, we bridge these two by proposing a new network structure with locally dense yet externally sparse connections.",
"This new structure uses dense modules, as basic building blocks and then sparsely connects these modules via a novel algorithm during the training process.",
"Experimental results demonstrate that the locally dense yet externally sparse structure could acquire competitive performance on benchmark tasks (CIFAR10, CIFAR100, and ImageNet) while keeping the network structure slim.",
"In this paper, we firstly create locally dense and externally sparse structures by prefixing some dense modules and add sparse connections between them.",
"Experiment results demonstrate that evolving sparse connections could reach competitive results on benchmark datasets.",
"In order to give properties of these biologically plausible structures, we apply several sets of contrast experiments as shown in Experiment.",
"By equally changing the input feature groups of each module during the whole training process, this strategy could alleviate the risk of the weights being trapped in local optimal point.",
"Same to most of the related works, redundancy of each dense module is not 'the larger the better', where the test accuracy will first increase within the growth rate increases, but finally drop while the growth is above some threshold.The combination of being dense and being sparse is an interesting area, and the internal dense and externally sparse structure also coincide with the modularity in human brain.",
"We prove the feasibility of these structures and give a simple algorithm to search best connections.",
"We also noticed that the connection matrix is not unique for reaching good performance.",
"We will concentrate on revealing the relationship between these similar connection matrices and the representing features behind it.In this case, we may acquire state of the art performance on other datasets and tasks in our future work.",
"Moreover, as these structures have various direct paths between input and output, separating a network into several small networks without any accuracy loss is also a promising topic."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.05405404791235924,
0.052631575614213943,
0.4285714328289032,
0.1304347813129425,
0.25,
0.3333333134651184,
0.05714285373687744,
0.1904761791229248,
0.0833333283662796,
0.19178082048892975,
0.10526315122842789,
0,
0.178571417927742,
0.12244897335767746
] | BkNUFjR5KQ | true | [
"In this paper, we explore an internal dense yet external sparse network structure of deep neural networks and analyze its key properties."
] |
[
"Statistical inference methods are fundamentally important in machine learning.",
"Most state-of-the-art inference algorithms are \n",
"variants of Markov chain Monte Carlo (MCMC) or variational inference (VI).",
"However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias. \n",
"In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation.",
"The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers attractive balance between approximation bias and computational efficiency.",
"We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods of MCMC and VI.",
"Statistical inference methods in machine learning are dominated by two approaches: simulation and optimisation.",
"Markov chain Monte Carlo (MCMC) is a well-known simulation-based method, which promises asymptotically unbiased samples from arbitrary distributions at the cost of expensive Markov simulations.",
"Variational inference (VI) is a well-known method using optimisation, which fits a parametric approximation to the target distribution.",
"VI is biased but offers a computationally efficient generation of approximate samples.",
"There is a recent trend of hybrid methods of MCMC and VI to achieve a better balance between computational efficiency and bias.",
"Hybrid methods often use MCMC or VI as an algorithmic component of the other.",
"In particular, Salimans et al. (2015) proposed a promising modified VI method that reduces approximation bias by using MCMC transition kernels.",
"Another technique reduces the computational complexity of MCMC by initialising the Markov simulation from a pretrained variational approximation (Hoffman, 2017; Han et al., 2017) .",
"Levy et al. (2018) proposed to improve MCMC using flexible non-linear transformations given by neural networks and gradientbased auto-tuning strategies.",
"In this work, we propose a novel hybrid method, called ergodic inference (EI).",
"EI improves over both MCMC and VI by tuning the hyper-parameters of a flexible finite-step MCMC chain so that its last state sampling distribution converges fast to a target distribution.",
"EI optimises a tractable objective function which only requires to evaluate the logarithm of the unnormalized target density.",
"Furthermore, unlike in traditional MCMC methods, the samples generated by EI from the last state of the MCMC chain are independent and have no correlations.",
"EI offers an appealing option to balance computational complexity vs. bias on popular benchmarks in machine learning.",
"Compared with previous hybrid methods, EI has following advantages:",
"• EI's hyperparameter tuning produces sampling distributions with lower approximation bias.",
"• The bias is guaranteed to decrease as the length of the MCMC chain increases.",
"• By stopping gradient computations, EI has less computational cost than related baselines.",
"We also state some disadvantages of our method:",
"• The initial state distribution in EI's MCMC chain has to have higher entropy than the target.",
"• The computational complexity per simulated sample of EI is in general higher than in VI."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.04999999701976776,
0.1249999925494194,
1,
0.290909081697464,
0.3199999928474426,
0.1860465109348297,
0.11320754140615463,
0.21739129722118378,
0.1463414579629898,
0.3333333432674408,
0.1860465109348297,
0.3199999928474426,
0.22641508281230927,
0.2448979616165161,
0.3333333432674408,
0.2857142686843872,
0.17391303181648254,
0.19607841968536377,
0.1304347813129425,
0.052631575614213943,
0.04999999701976776,
0.23255813121795654,
0,
0.054054051637649536,
0.1304347813129425,
0.09090908616781235
] | HkxZVlHYvH | true | [
"In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation."
] |
[
"We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.",
"To demonstrate this effect, we train a new \"meta\" network to predict from either the final output of the underlying \"base\" network or the output of one of the base network's intermediate layers whether the base network will be correct or incorrect for a particular input.",
"We find that, over a wide range of tasks and base networks, the meta network can achieve accuracies ranging from 65% - 85% in making this determination.\n",
"What do neural networks know and where do they know it?",
"At what stage of a network's processing does a \"decision\" get made and are there reliable markers of a correct or incorrect decision either in the output or during a network's operation at one of its intermediate layers?",
"To begin this investigation, we ask where in a neural network's operation it becomes possible to determine whether the network might be correct or incorrect in its output for a particular input.",
"We feed a second, \"meta\" network the outputs of either an intermediate or final layer of the first, \"base\", network and train the meta network to predict whether the base network will be correct for an individual input.",
"We call the second network a meta or metacognitive network because humans and other animals are known to make so-called metacognitive judgments to assess their confidence in the correctness of their beliefs or actions BID4 .We",
"find that the meta network is able to predict whether a base network will be correct or incorrect on previously unseen inputs with up to 69% accuracy for base networks 1 Computer Science Department, Columbia University, New York, New York, USA 2 Mechanical Engineering Department and Data Science Institute, Columbia University, New York, New York, USA. Correspondence",
"to: Chad DeChant <[email protected]>.Identifying and",
"Understanding Deep Learning Phenomena Workshop at the International Conference on Machine Learning 2019 FIG0 . Meta network pipeline",
": the Meta network receives as input the output of one of the base network's layers for a particular input and predicts whether the base network will be correct.classifying ImageNet images and 85% accuracy for a base network classifying CIFAR 10 images. As these two examples",
"suggest, the accuracy of the meta network is higher for simpler underlying tasks in our experiments.The usefulness of the layers' outputs for predicting the accuracy of the network is lowest at the earliest layers in the network and increases to be highest either at the last hidden layer or, in most cases, the final output. Meta networks trained",
"on different layers' outputs have significant but not complete overlap in which examples they are able to correctly predict will go on to be accurately or inaccurately classified, suggesting that there is slightly different information at each level which can be used to make assessments of accuracy.",
"It is clear that the meta networks are able to learn something about the intermediate and final outputs which are indicative of the networks' accuracy.",
"Just what that is and whether it can be useful in improving or interpreting the networks is as yet unclear.It is difficult to estimate the accuracy of a neural network at runtime.",
"On tasks that involve a choice between discrete options, the value of the highest output after it is put through a softmax is often considered to represent the network's confidence or estimate of the probability of the corresponding class's being correct.",
"However, it is not clear that this interpretation is warranted.",
"Recent work has shown that these outputs are not reliable BID2 .",
"It is interesting, then, to consider whether when a meta network is trained on the final outputs it learns to simply classify those outputs in which the predicted class has very high values as correct and those with relatively low values as incorrect.",
"This would correspond to the general intuition that high values for predicted classes indicate meaningfully high confidence.Figure 2 graphically illustrates the outputs of a ResNet18 network trained on ImageNet, with sample outputs of the highest confidence class arrayed along the x axis (a similar chart for outputs of the BiDAF model is found in the Appendix).",
"It shows that while there is certainly a correlation between a base network's accuracy and the value of the output corresponding to the highest predicted class, it is not a simple or completely reliable one.",
"On average, the base network indeed tends to be more confident in its correct answers than its wrong answers, and the set of examples the meta network is correct on shows this pattern clearly while the examples the meta network gets wrong show less distinct base \"confidence\" numbers.",
"However, it is apparent that the base network is often very \"confident\" of a wrong answer and not confident of a correct answer.",
"From inspecting the plots it is clear that the meta network is not judging the net- FIG1 .",
"Examples of maximum values (arrayed along the x axis) output by a Resnet18 network on ImageNet after the softmax function.",
"The meta network is correct in both cases in the top row and incorrect in the bottom row; the Resnet base classifier is correct on the left and incorrect on the right in both rows.",
"The mean value in each category is given.",
"This shows that the meta network does not learn to simply classify the output based on the value of the class prediction, which is often interpreted as the network's 'confidence'.work",
"'s output simply by learning a threshold \"confidence\" level above which it predicts it will be correct and below which it predicts it will be incorrect. This",
"is evident by the large number of incorrect high \"confidence\" outputs of the base network which the meta network accurately marks as incorrect, as well as the correct low \"confidence\" outputs which the meta networks finds correct. Further",
"study will be required to better understand what features the meta network has learned to look for to measure accuracy.Neural networks designed for a classification-type task are generally trained to give an answer, not to also indicate whether they are likely to be right or wrong. While there",
"has has certainly been work to address this, notably that involving Bayesian networks BID0 , the present work and its future extensions may point in other fruitful directions for characterizing a network's likely accuracy at runtime. There may also",
"be interesting connections to work studying neural networks from an information theoretic perspective BID9 . We train meta",
"networks to judge whether a base network is correct or incorrect on particular inputs by feeding the meta network outputs, final or intermediate, from the base network. The blue arrows",
"show which outputs of the base Bi-Directional Attention Flow model the meta network examines when classifying the base network's output as accurate or inaccurate. Image adapted from",
"BID8"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8636363744735718,
0.4727272689342499,
0.16326530277729034,
0.06666666269302368,
0.38461539149284363,
0.4313725531101227,
0.3921568691730499,
0.19999998807907104,
0.2769230604171753,
0,
0.05405404791235924,
0.37037035822868347,
0.25806450843811035,
0.22580644488334656,
0.2790697515010834,
0.3529411852359772,
0.29629629850387573,
0.06666666269302368,
0.0624999962747097,
0.28070175647735596,
0.1818181723356247,
0.2745097875595093,
0.21052631735801697,
0.25,
0.11428570747375488,
0.19999998807907104,
0.23255813121795654,
0.13793103396892548,
0.2083333283662796,
0.2926829159259796,
0.260869562625885,
0.19354838132858276,
0.17543859779834747,
0.10526315122842789,
0.30434781312942505,
0.2666666507720947
] | H1xXwEB2h4 | true | [
"Information about whether a neural network's output will be correct or incorrect is somewhat present in the outputs of the network's intermediate layers."
] |
[
"We develop a new algorithm for imitation learning from a single expert demonstration.",
"In contrast to many previous one-shot imitation learning approaches, our algorithm does not assume access to more than one expert demonstration during the training phase.",
"Instead, we leverage an exploration policy to acquire unsupervised trajectories, which are then used to train both an encoder and a context-aware imitation policy.",
"The optimization procedures for the encoder, imitation learner, and exploration policy are all tightly linked.",
"This linking creates a feedback loop wherein the exploration policy collects new demonstrations that challenge the imitation learner, while the encoder attempts to help the imitation policy to the best of its abilities.",
"We evaluate our algorithm on 6 MujoCo robotics tasks."
] | [
1,
0,
0,
0,
0,
0
] | [
0.52173912525177,
0.17142856121063232,
0.0624999962747097,
0,
0.10810810327529907,
0.09999999403953552
] | Skl-fAVYvH | false | [
"Unsupervised self-imitation algorithm capable of inference from a single expert demonstration."
] |
[
"Significant work has been dedicated to developing methods for communicating reasons for decision-making within au-\n",
"tomated scheduling systems to human users.",
"However, much less focus has been placed on communicating reasons for why\n",
"scheduling systems are unable to arrive at a feasible solution when over-constrained.",
"We investigate this problem in the\n",
"context of task scheduling.",
"We introduce the agent resource-constrained project scheduling problem (ARCPSP), an ex-\n",
"tension of the resource-constrained project scheduling problem which includes a conception of agents that execute tasks\n",
"in parallel.",
"We outline a generic framework, based on efficiently enumerating minimal unsatisfiable sets (MUS) and\n",
"maximal satisfiable sets (MSS), to produce small descriptions of the source of infeasibility.",
"These descriptions are supple-\n",
"mented with potential relaxations that would fix the infeasibility found within the problem instance.",
"We illustrate how\n",
"this method may be applied to the ARCPSP and demonstrate how to generate different types of explanations for an over-\n",
"constrained instance of the ARCPSP.",
"In many real-world applications, human users in charge of developing plans and making decisions are aided by automated planning and scheduling systems.",
"For example, NASA mission planning makes use of a large team of human planners that use various automated scheduling systems in order to construct day-to-day as well as long-term plans for crew members.",
"A primary function of these automated systems is generating different types of plans and schedules while ensuring that various constraints do not conflict.",
"When plans are ultimately constructed by human planners for a human crew, it is essential for both the planners, and the crew executing the plans, to understand how and why certain scheduling decisions were made by automated tools.",
"In general, when the primary function of such constraint satisfaction and optimization tools is to support human decision-making, it is necessary for the automated systems to be transparent in how they arrive at certain outputs.Significant work has been dedicated to generating humanunderstandable explanations for why certain automated planning decisions were made BID10 ).However",
", little work has been done in generating reasons for why plans or schedules cannot be generated under certain specifications. Human",
"users interacting with such constraint satisfaction or optimization tools are bound to run into configurations for which no feasible solution exists. Fixing",
"infeasible configurations is a challenging task for the human user if they are unable to understand why the solver arrives at an unsatisfiable conclusion.While various partial constraint satisfaction tools exist for solving such over-constrained problems BID4 , solutions employing these tools have significant limitations that make them less applicable in certain real-life scenarios. Most of",
"these methods employ constraint hierarchies to determine which constraints should be violated in order to satisfy more important ones. However",
", in complicated planning or scheduling applications involving multiple human agents, constructing such a hierarchy is often impractical. Instead",
", if reasons for infeasibility can be properly conveyed back to the human user, they can make high-level decisions to solve infeasibility in any way they see fit.In this paper, we provide a framework for iteratively generating human-understandable explanations of infeasibility for a specific class of scheduling problems. These",
"explanations manifest themselves as minimal sets of specifications (or constraints) that are responsible for causing infeasibility, coupled with suggestions for relaxations through which feasibility could be achieved.The method proposed in this paper allows users to enumerate over a series of explanations for infeasible instances of problems at varying levels of abstraction. For example",
", raw explanations of relevant low-level constraints may be directly output or a causal link may be established back to higher level descriptions of the problem to understand what specifications were responsible for the feasibility issue. This system",
"also allows directed questions about feasibility to be asked, such as \"why can task A not be scheduled after task B?\"A strategy for iteratively generating minimal unsatisfiable sets (MUS) and maximal satisfiable sets (MSS) forms the basis for interpreting the infeasibility of the problem. Existing methods",
"such as QuickXplain BID5 ) focus on generating a single most preferable explanation of infeasibility. Likewise, BID1 aims",
"to generate a single explanation in the context of optimization without attempting to achieve minimality. However, overconstrained",
"problems may contain several infeasibility issues which cannot be solved by changing only a single part of the problem. So, because a single MUS",
"only provides indication of a single feasibility issue, we aim to enumerate several sets of MUS to highlight multiple feasibility issues found within the problem instance. Therefore, the proposed",
"enumeration strategy is based on MARCO BID8 ), a flexible algorithm for generating MUSes and MSSes in succession.Motivated by the domain of space mission scheduling, we introduce and investigate the agent resource-constrained project scheduling problem (ARCPSP), an extension of the resource-constrained project scheduling problem (RCPSP) that incorporates the delegation of tasks to differing agents. This problem cannot be",
"framed as an instance of the RCPSP because it deals with the case of asymmetric agents in which certain tasks may only be executed by a subset of the agents. This problem is meant",
"to model applications in which efficient scheduling for teams of differing agents is critical. While we only explicitly",
"investigate this problem, the generality of the approach outlined in this paper would allow the methodology to be adapted for different types of constraint satisfaction and optimization tools as well as different types of planning and scheduling problems.The main contributions of this paper are the following: firstly, we provide a formal definition of the agent resourceconstrained project scheduling problem (ARCPSP) in Section 3. Then in Section 4 we outline",
"a difference logic encoding of the ARCPSP which is used to check feasibility of problem instances. The framework for generating",
"humanunderstandable explanations of infeasibility for instances of the ARCPSP is described in Section 5. Finally, we provide an overview",
"of the trade-off between interpretability and expressibility of different types of explanations and conclude by discussing how these ideas can be extended.",
"We introduced the agent resource-constrained project scheduling problem (ARCPSP) along with an associated difference logic encoding.",
"We proposed a general framework for generating minimal conflicts and minimal relaxations based on the MARCO algorithm and demonstrated how it could be used to generate varying types of descriptions for why infeasibility is occurring in instances of the ARCPSP.",
"The framework outlined in this paper is general enough to be applied to constraint satisfaction formulations for various other scheduling and planning problems.",
"These ideas may potentially be further extended to different kinds of formal languages, such as linear temporal logic, that are used to describe planning problems."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.05882352590560913,
0.07692307233810425,
0.1249999925494194,
0.1875,
0.1538461446762085,
0.1666666567325592,
0.19354838132858276,
0.22857142984867096,
0.11764705181121826,
0.1249999925494194,
0,
0.060606054961681366,
0.08695651590824127,
0.1538461446762085,
0.07999999821186066,
0.1463414579629898,
0.19999998807907104,
0.1428571343421936,
0.19230768084526062,
0.20588235557079315,
0.19512194395065308,
0.0476190410554409,
0.21917808055877686,
0.051282044500112534,
0.20512819290161133,
0.39344263076782227,
0.20588235557079315,
0.15094339847564697,
0.13333332538604736,
0.21052631735801697,
0.1666666567325592,
0.1904761791229248,
0.08888888359069824,
0.23529411852359772,
0.16326530277729034,
0.2631579041481018,
0.17142856121063232,
0.3589743673801422,
0.3684210479259491,
0.10256409645080566,
0.1666666567325592,
0.4363636374473572,
0.2857142686843872,
0.09090908616781235
] | rkG3QTnQcN | true | [
"We develop a framework for generating human-understandable explanations for why infeasibility is occurring in over-constrained instances of a class of resource-constrained scheduling problems."
] |
[
"Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification.",
"We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead.",
"We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input.",
"On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, contrary to gradient-based saliency and detectors which rely on the input image.",
"The latter are unable to detect adversarial images when the L_2- and L_infinity- norms of the perturbations are too small.",
"Lastly, we find that the salient pixel based detector improves on saliency map based detectors as it is more robust to white-box attacks.",
"Adversarial examples highlight a crucial difference between human vision and computer image processing.",
"Often computers fail to understand the relevant characteristics of an image for classification (Ribeiro et al., 2016) or fail to generalize locally, i.e., misclassify examples close to the training data (Szegedy et al., 2013) .",
"Attacks exploit this property by altering pixels the classifier heavily relies on -pixels which are irrelevant to humans for object recognition.",
"As a consequence, adversarial perturbations fool classifiers while the correct class remains clear to humans.",
"Saliency maps identify the pixels an image classifier uses for its prediction; as such, they can be used as a tool to understand why a classifier is fooled.",
"Building on this concept, researchers have shown qualitatively that adversarial perturbations cause a shift in the saliency of classifiers (Fong & Vedaldi, 2017; Gu & Tresp, 2019) .",
"Figure 1 shows examples of a natural image and corresponding adversarial images, each above their respective saliency maps.",
"The saliency maps corresponding to adversarial images show perceptible differences to that of the original image, even though adversarial images themselves often seem unperturbed.",
"For the original image, the saliency map shows that the classifier focuses on the four (and a couple of random pixels on the left).",
"We observe that for the adversarial images, the classifier starts focusing more on irrelevant aspects of the left side of the image.",
"There is ample research into different techniques for finding saliency maps (see e.g. Zeiler & Fergus, 2014; Springenberg et al., 2014; Bach et al., 2015; Ribeiro et al., 2016; Shrikumar et al., 2017; Selvaraju et al., 2017; Zintgraf et al., 2017; Fong & Vedaldi, 2017) .",
"However, not all saliency maps are equally informative (Fong & Vedaldi, 2017) .",
"For example, the Jacobian 1 can be used to determine the saliency of a pixel in the classification of the image (Papernot et al., 2016b; Zhang et al., 2018) .",
"As the Jacobian is often used to generate adversarial examples, intuitively, we expect that it can be used effectively to detect adversarial perturbations.",
"Zhang et al. (2018) propose a defense to this effect: they determine whether an input is adversarial, given the Jacobian-based The top is the input image and the bottom shows the corresponding saliency map.",
"In the second row, lighter colours correspond to higher saliency (black corresponds to a saliency of 0, the lowest possible value).",
"The classifier predicts (from left to right) the images as: 4, 9, 9 , 8, 9, 9.",
"Note the stark difference between the saliency masks of the original image and those of the adversarial examples.",
"saliency map concatenated with the image.",
"However, as shown qualitatively by Gu & Tresp (2019) , gradients are not always able to capture differences between adversarial images and natural images (for an example see Figures 7 and 8 in Appendix D).",
"2 Here we inspect the proposed Jacobian-based approach and show that only the concatenated input affects the technique's performance in detecting adversarial examples, with the Jacobian having no effect.",
"While gradients may not be informative for detection, saliency should be an effective tool for detecting adversarial images.",
"In our analysis, we use more powerful model-based saliency techniques and show that the magnitude of the shift of the saliency map due to adversarial perturbations often exceeds the L 2 distance between the saliency maps of different natural images.",
"Building on this result, we consider two different possible effects adversarial perturbations might have on the classifier:",
"1. They might cause the classifier to focus on the wrong pixel locations",
"2. They might change the pixel values of salient pixels Based on these hypotheses, we employ two CNN classifier architectures to detect adversarial images.",
"Claim (1) can be captured by shifts in saliency maps, as previously considered by Fong & Vedaldi (2017) .",
"In this work, we extend on their analysis 3 by proving the defensive capability of our model-based saliency against difficult black-box attacks, such as C&W and DeepFool 4 , as well as white-box adversarial attacks.",
"By considering claim (2), we demonstrate that incorporating pixel values improves the performance of the classifier when shifts in saliency maps do not suffice to capture adversarial perturbations.",
"We also show that our salient-pixel based defense generalizes well (detecting stronger attacks when trained on weaker attacks) and is more robust than the saliency map defense against white-box attacks.",
"Lastly, we demonstrate that saliency can be used to detect adversarial examples generated by small perturbations, contrary to other defenses, which exhibit threshold behavior: i.e., when the adversarial perturbation is too small, other defenses (specifically Gong et al., 2017; Zhang et al., 2018) are unable to detect the adversarial images.",
"In our analysis, we ascertain that the saliency maps of adversarial images differ from those of natural images.",
"Further, we show that salient pixel based defenses perform better than a saliency map defense.",
"When trained on a single black-box attack, our method is able to detect adversarial perturbations generated by different and stronger attacks.",
"We show that gradients are unable to capture shifts in saliency due to adversarial perturbations and present an alternative adversarial defense using learnt saliency models that is effective against both black-box and white-box attacks.",
"Building on the work of Gong et al. (2017) , we further establish the notion of threshold behavior, showing that the trend depends on the L 2 and L ∞ -norms of the perturbations and therefore also prevails when using other methods (JSD) and across different attacks.",
"Future work could further investigate the performance of the defense in different applications.",
"For example, as our method runs in real-time, it could be used to detect adversarial perturbations in video to counter recent attacks Jiang et al., 2019) .",
"A ARCHITECTURES, HYPER-PARAMETERS AND DATA Figure 3 : ASSIRA, CIFAR-10, and MNIST image classifier architecture and hyper-parameters.",
"The first entry corresponds to the first layer, and the table proceeds chronologically until the last layer.",
"Parameters f, k, p, s and n represent the number of filters, kernel size, pooling size, stride, number of filters, respectively.",
"If stride is omitted, it is set to 1.",
"All classifiers have a final softmax activation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.4363636374473572,
0.23728813230991364,
0.19999998807907104,
0.25531914830207825,
0.23529411852359772,
0.0476190447807312,
0.06779660284519196,
0.07999999821186066,
0.13636362552642822,
0.1111111044883728,
0.1818181723356247,
0.12765957415103912,
0.19999998807907104,
0.0833333283662796,
0.12765957415103912,
0.06451612710952759,
0.09756097197532654,
0.11320754140615463,
0.20408162474632263,
0.20689654350280762,
0.08510638028383255,
0.045454539358615875,
0.1395348757505417,
0.05714285373687744,
0.25806450843811035,
0.1818181723356247,
0.2222222238779068,
0.26229506731033325,
0.08888888359069824,
0.04878048226237297,
0.07547169178724289,
0.1304347813129425,
0.22580644488334656,
0.2857142686843872,
0.35087719559669495,
0.1944444328546524,
0.13333332538604736,
0.1818181723356247,
0.2800000011920929,
1,
0.1515151411294937,
0.09756097197532654,
0.18518517911434174,
0.04444443807005882,
0.09302324801683426,
0.043478257954120636,
0.10810810327529907,
0
] | HJe5_6VKwS | true | [
"We show that gradients are unable to capture shifts in saliency due to adversarial perturbations and present an alternative adversarial defense using learnt saliency models that is effective against both black-box and white-box attacks."
] |
[
"Disentangled encoding is an important step towards a better representation learning.",
"However, despite the numerous efforts, there still is no clear winner that captures the independent features of the data in an unsupervised fashion.",
"In this work we empirically evaluate the performance of six unsupervised disentanglement approaches on the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge.",
"The methods investigated in this work are Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE.",
"The capacities of all models were progressively increased throughout the training and the hyper-parameters were kept intact across experiments.",
"The methods were evaluated based on five disentanglement metrics, namely, DCI, Factor-VAE, IRS, MIG, and SAP-Score.",
"Within the limitations of this study, the Beta-TCVAE approach was found to outperform its alternatives with respect to the normalized sum of metrics.",
"However, a qualitative study of the encoded latents reveal that there is not a consistent correlation between the reported metrics and the disentanglement potential of the model.",
"Unsupervised disentanglement is an open problem in the realm of representation learning, incentivized around interpretability BID8 BID1 .",
"A disentangled representation is a powerful tool in transfer learning, few shot learning, reinforcement learning, and semi-supervised learning of downstream tasks (Goo, 2018; BID9 BID1 .Here",
", we investigate the performance of some of the promising disentanglement methods from the family of variational autoencoders (VAE). The",
"methods are evaluated based on five relatively established disentanglement metrics on the simplistic rendered images of the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge.",
"In this work we compared the degree of disentanglement in latent encodings of six variational learning algorithms, namely, β-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and β-TCVAE.",
"The empirical results TAB0 point to β-TCVAE being marginally the superior option and, consequently, chosen as the best performing approach.",
"However, a qualitative study of the traversed latent spaces (Appendix B) reveals that none of the models encoded a true disentangled representation.",
"Lastly, although the DIP-VAE-II model is under performing according to the quantitative results, it has the least number of ignored latent variables with a promising latent traversal compared to other higher performing methods (Appendix B).",
"As a result of these inconsistencies, we find the five metrics utilized in this study inadequate for the purpose of disentanglement evaluation.",
"Among the limitations of this study is the insufficient search of the hyper-parameters space for all the six learning algorithms.",
"Moreover, the NeurIPS 2019 Disentanglement Challenge imposed an 8-hour limit on the training time of the models which we found to be insufficient.",
"This, while the maximum number of iterations was set to 200k in our experiments, this value was limited to 100k in the submissions made to the challenge portal.2.",
"The repository will be publicly released upon the completion of the competition."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0,
0.07999999821186066,
0.13793103396892548,
0,
0.0952380895614624,
0,
0.08695651590824127,
0.07692307233810425,
0.0952380895614624,
0.0714285671710968,
0.09999999403953552,
0.13333332538604736,
0.0714285671710968,
0,
0.08695651590824127,
0.05882352590560913,
0.0833333283662796,
0.09999999403953552,
0.1599999964237213,
0.07407407462596893,
0.13333332538604736
] | rJePwB8prH | true | [
"Inadequacy of Disentanglement Metrics"
] |
[
"Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward.",
"In this paper, we aim to address this from a different angle.",
"In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do.",
"For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together.",
"The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming.",
"To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL), which consists of agent modeling and policy learning.",
"We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents.",
"The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.",
"As the main assumption and building block in economics, self-interested agents play a central roles in our daily life.",
"Selfish agents, with their private beliefs, preferences, intentions, and skills, could collaborate (ad-hoc teaming) effectively to make great achievement with proper incentives and contracts, an amazing phenomenon that happens every day in every corner of the world.However, most existing multi-agent reinforcement learning (MARL) methods focus on collaboration when agents selflessly share a common goal, expose its complete states and are willing to be trained towards the goal.",
"While this is plausible in certain games, few papers address the more practical situations, in which agents are self-interested and inclined to show off, and only get motivated to work with proper incentives.In this paper, we try to model such behaviors.",
"We have multiple workers and a manager, together to work on a set of tasks.",
"The manager gets an external reward upon the completion of some tasks, or one specific task.",
"Each worker has a skill set and preference over the tasks.",
"Note that their skills and preferences may not align with each other ( Fig. 1(a",
") ), and are not known to the manager ( Fig. 1(b",
") ). Furthermore",
", manager may not get any external reward until a specific task is complete, which depends on other tasks.By default, the self-interested workers simply choose the most preferred tasks, which is often unproductive from the perspective of the entire project. Therefore,",
"the manager gives additional incentives in the form of contracts. Each contract",
"assigns a goal and a bonus for achieving the goal to a worker. Figure 1: Illustration",
"of our problem setup. Workers have different",
"skills (abilities for completing tasks) and preferences (which tasks they like) indicated by the bar charts. They are self-interested",
"and perform the tasks they prefer the most. To achieve optimal collaboration",
", a manager has to first infer workers' minds, and assigns right bonuses to workers for finishing specified tasks in the form of contracts. Consequently, workers will adjust",
"their intentions and work together accordingly. E.g., workers in the figure initially",
"all want to do task B. To finish all tasks, the manager has to pay more bonus to worker 1 and 2 so that they will perform A and C respectively.With the external incentives, workers may choose different goals than their preferences. Upon completion of assigned goals, the",
"manager receives the rewards associated with those goals and makes the promised payments to the workers. To generate optimal contracts, the manager",
"must infer the workers' minds and learn a good policy of goal and reward assignment.Conventional approaches of mechanism design tackle similar problems by imposing strong assumptions (e.g., skill/preference distributions, task dependencies, etc) to find an analytic solution. In contrast, we aim to train a manager using",
"reinforcement learning to i) assess minds of workers (skills, preferences",
", intentions, etc.) on the fly, ii) to optimally assign contracts to maximize",
"a collaborative reward, and iii) is adapted to diverse and even evolving",
"workers and environments.For this, we propose a novel framework -Mind-aware Multi-agent Management Reinforcement Learning (M 3 RL), which entails both agent modeling for estimating workers' minds and policy learning for contract generation. For agent modeling, we infer workers' identities",
"by their performance history, and track their internal states with a mind tracker trained by imitation learning (IL). For contract generation, we apply deep reinforcement",
"learning (RL) to learn goal and bonus assignment policies. To improve the learning efficiency and adaptation, we",
"also propose high-level successor representation (SR) learning BID17 and agent-wise -greedy exploration.As a proof of concept, we evaluate our approach in two environments: Resource Collection and Crafting in 2D Minecraft, to simulate multi-agent management problems. The setup and underlying assumptions are designed to",
"mimic real world problems, where workers are not compelled to reveal their true preferences and skills, and there may be dependency between tasks resulting in delayed and sparse reward signals. Workers may also be deceitful (e.g., accepting a contract",
"even when the assigned goal is unreachable). Our experiments demonstrate that the manager trained by our",
"approach can i) estimate the mind of each worker from the recent behaviors",
", ii) motivate the workers to finish less preferable or intermediate",
"tasks by assigning the right bonuses, iii) is adaptive to changing teams, e.g., change of members and/or",
"change of workers' skills and preferences, iv) and has good generalization in different team sizes and novel",
"environments.We have conducted substantial ablation studies by removing the key components, including IL, SR, agent-wise -greedy exploration, and performance history. Our approach shows a consistent performance in standard settings",
"as well as in more challenging ones where workers' policies are stochastic and sub-optimal, or there are multiple levels of bonuses required to motivate workers.",
"In this paper, we propose Mind-aware Multi-agent Management Reinforcement Learning (M 3 RL) for solving the collaboration problems among self-interested workers with different skills and preferences.",
"We train a manager to simultaneously infer workers' minds and optimally assign contracts to workers for maximizing the overall productivity, for which we combine imitation learning and reinforcement learning for a joint training of agent modeling and management policy optimization.",
"We also improve the model performance by a few techniques including learning high-level successor representation, agent-wise -greedy exploration, and agent identification based on performance history.",
"Results from extensive experiments demonstrate that our approach learns effectively, generalizes well, and has a fast and continuous adaptation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.25,
0.11428570747375488,
0.06666666269302368,
0.21333332359790802,
0.1818181723356247,
0.27272728085517883,
0.11764705181121826,
0.03999999538064003,
0.0952380895614624,
0.06976743787527084,
0.06557376682758331,
0.21052631735801697,
0.04999999701976776,
0.05714285373687744,
0,
0.1111111044883728,
0,
0.13114753365516663,
0.11428570747375488,
0.1621621549129486,
0,
0.1395348757505417,
0.17142856121063232,
0.23999999463558197,
0.052631575614213943,
0.08955223113298416,
0.1904761791229248,
0.11594202369451523,
0.11764705181121826,
0.10810810327529907,
0.11428570747375488,
0.27586206793785095,
0.08510638028383255,
0.05128204822540283,
0.0937499925494194,
0.09836065024137497,
0.10256409645080566,
0,
0.17142856121063232,
0.1395348757505417,
0,
0.11320754140615463,
0.12765957415103912,
0.4000000059604645,
0.28070175647735596,
0.1249999925494194,
0.0476190410554409
] | BkzeUiRcY7 | true | [
"We propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL) for training a manager to motivate self-interested workers to achieve optimal collaboration by assigning suitable contracts to them."
] |
[
"Inferring temporally coherent data features is crucial for a large variety of learning tasks.",
"We propose a network architecture that introduces temporal recurrent connections for the internal state of the widely used residual blocks.",
"We demonstrate that, with these connections, convolutional neural networks can more robustly learn stable temporal states that persist between evaluations.",
"We demonstrate their potential for inferring high-quality super-resolution images from low resolution images produced with real-time renderers.",
"This data arises in a wide range of applications, and is particularly challenging as it contains a strongly aliased signal.",
"Hence, the data differs substantially from the smooth inputs encountered in natural videos, and existing techniques do not succeed at producing acceptable image quality.",
"We additionally propose a series of careful adjustments of typical generative adversarial architectures for video super-resolution to arrive at a first model that can produce detailed, yet temporally coherent images from an aliased stream of inputs from a real-time renderer.",
"Learning expressive and stable representations is a goal that lies at the heart of a vast range of deep learning tasks (Dahl et al., 2011; Radford et al., 2015; .",
"While typical recurrent architectures focus on feedback loops to form persistent latent-spaces (Rumelhart et al., 1988; Chaitanya et al., 2017) , we show that for inference tasks where the result is conditioned on a stream of inputs, these existing architectures unnecessarily complicate the learning task, and fail to reliably stabilize the inference.",
"With our work, we propose a new type of connection for the very widely used building blocks of ResNet architectures (He et al., 2015) that lets the network easily compare internal states in-place.",
"The learned representation can then, e.g., yield a detailed image sequence with natural changes.",
"We demonstrate this with a particularly challenging learning objective: we aim for the synthesis of detailed images from a stream of strongly aliased inputs.",
"Specifically, we show that adversarially trained convolutional neural networks (CNNs) can be leveraged to produce detailed images from unfiltered, low-resolution images generated via point-sampling with a rasterization-based real-time renderer.",
"Real-time graphics are the basis for a wide range of applications: Generating images with a sufficient resolution from low resolution, yet computationally light-weight renderings is a task that is, e.g., important for generating content for the high resolution screens of mobile devices, and is especially interesting for streaming services of games in order to compute the final resolution only on the client.",
"Our work shares its goal with a variety of approaches that have been proposed for generating highquality images for raytracing algorithms (Zhang et al., 2016; Chaitanya et al., 2017) and purely image-based super-resolution algorithms (Sajjadi et al., 2017; .",
"Our architecture differs from previous works as the proposed recurrent connection allows the network to learn a temporally stable latent-space representation that does not negatively impact the residual flow of a ResNet architecture.",
"Also, the temporal connections for deeper layers of the network are important for successful learning, as we will demonstrate below.",
"While the basic concept of depth-recurrent connections could potentially be applied to a variety of sequence-based learning tasks, we focus on demonstrating its potential for pushing forward the limits of real-time rendering.",
"Hence, we additionally outline a series of modifications to existing architectures which are crucial for achieving high quality of the strongly aliased input images from LR RDA modified TecoGAN re-trained DRR Figure 1 : Given a strongly aliased low-resolution input rendering with one sample per pixel, recurrent non-adversarial training ((Chaitanya et al., 2017) with modifications for fair comparisons) produces blurry results, and existing adversarial methods ( , re-trained) introduce strong flickering artifacts.",
"Trained on the same data, due to the proposed DRR connections our network infers more consistent spatio-temporal features (see the supplemental footage for a clear assessment of the temporal differences).",
"typical real-time rendering pipelines.",
"A typical input for our network is shown on the left of Fig. 1 .",
"This application scenario is especially challenging for CNNs, since it requires to work with images that need to be rendered at very high frame rates and, thus, exhibit severe aliasing due to point sampling and typically low resolutions.",
"The aliasing not only distorts the spatial signal, but likewise affects the temporal changes.",
"Therefore, a super-resolution (SR) network can't rely on receiving smoothly changing, filtered inputs that allow for localization of small features.",
"Rather, it has to learn over the course of multiple frames to infer consistent output images (Fig. 1, right ) from spatially and temporally aliased input content.",
"As we will demonstrate in a number of studies below, this task is where our proposed depth-recurrent connections unfold their strength.",
"They enable the network to match the data distribution of the targets, i.e., to synthesize images with a high visual quality in terms of detail as well as their temporal behavior.",
"We show results and comparisons in the paper, and provide many additional evaluations in the supplemental material 1 , where videos more clearly show spatial and temporal differences.",
"As we focus on real-time rendering as our use case scenario, ideally the performance of the inference step needs to surpass the performance of the renderer.",
"For a desired output resolution of 1920×1080, our pre-trained model takes 113ms per frame on average.",
"2 Although this is not yet fast enough for real-time applications, we expect that techniques such as network compression (Choi et al., 2018; Molchanov et al., 2016) and evaluation of the models with dedicated hardware (NVIDIA Corporation, 2017) will easily yield very significant performance improvements.",
"We have demonstrated how depth-recurrent residual connections can be leveraged to learn stable internal latent-space representations in conditional generator architectures.",
"The DRR connections are particularly promising for iterative models with strongly aliased data, such as low-resolution inputs from a real-time renderer.",
"We have additionally shown how to achieve high quality synthesis in the context of real-time rendering by carefully analyzing and adjusting the network architecture.",
"We anticipate that DRRs could be beneficial for a variety of other tasks such as object tracking (Ning et al., 2017) and physics predictions (Li et al., 2019) .",
"A DATA As source of our data we use the projects \"FPS Sample\" and \"Book of the Dead: Environment\" (Unity Technologies, 2019b;a) for the Unity engine, both use the HDRP.",
"We captured a total of 57 120-frame sequences, split 50-5-2 for training, validation and testing.",
"For each frame we have lit color (the final image), unlit diffuse color, view-space surface normals, roughness, screen-space motion and depth for both HR and LR.",
"This data is easy to acquire as it can be inferred from the scene, geometry and materials and is rendered by default in Unity's HDRP.",
"However, the use of unlit color, normals or roughness had no tangible effects during our tests.",
"Most post-processing effect have been turned off, but the HR color is augmented with TAA.",
"HR is rendered and captured at a resolution of 512 × 512, LR at 128 × 128."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1428571343421936,
0.12121211737394333,
0.05882352590560913,
0.13333332538604736,
0.12121211737394333,
0.10810810327529907,
0.12244897335767746,
0.04999999701976776,
0.10344827175140381,
0.1304347813129425,
0.06666666269302368,
0.1111111044883728,
0,
0.0923076868057251,
0.12765957415103912,
0.04651162400841713,
0.1249999925494194,
0.09302324801683426,
0.051948048174381256,
0.09756097197532654,
0,
0.2142857164144516,
0.03999999538064003,
0,
0.1764705777168274,
0.04999999701976776,
0.11428570747375488,
0.0952380895614624,
0.05405404791235924,
0.05714285373687744,
0.06666666269302368,
0.06896551698446274,
0.11764705181121826,
0.05714285373687744,
0.10810810327529907,
0.09756097197532654,
0.1538461446762085,
0.13793103396892548,
0.05128204822540283,
0.05405404791235924,
0.06666666269302368,
0,
0.0714285671710968
] | H1gW93NKvH | true | [
"A method for persistent latent states in ResBlocks demonstrated for super-resolution of alised image sequences."
] |
[
"The backpropagation algorithm is the most popular algorithm training neural networks nowadays.",
"However, it suffers from the forward locking, backward locking and update locking problems, especially when a neural network is so large that its layers are distributed across multiple devices.",
"Existing solutions either can only handle one locking problem or lead to severe accuracy loss or memory inefficiency.",
"Moreover, none of them consider the straggler problem among devices.",
"In this paper, we propose \\textbf{Layer-wise Staleness} and a novel efficient training algorithm, \\textbf{Diversely Stale Parameters} (DSP), which can address all these challenges without loss of accuracy nor memory issue.",
"We also analyze the convergence of DSP with two popular gradient-based methods and prove that both of them are guaranteed to converge to critical points for non-convex problems.",
"Finally, extensive experimental results on training deep convolutional neural networks demonstrate that our proposed DSP algorithm can achieve significant training speedup with stronger robustness and better generalization than compared methods."
] | [
0,
0,
0,
0,
0,
1,
0
] | [
0.13793103396892548,
0.1304347813129425,
0.05714285373687744,
0.1428571343421936,
0.2083333283662796,
0.22727271914482117,
0.08510638028383255
] | HJgLlgBKvH | false | [
"We propose Diversely Stale Parameters to break lockings of the backpropoagation algorithm and train a CNN in parallel."
] |
[
"The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents.",
"If AI would be able to understand the meaning of language through its using it, it could also transfer it to other situations flexibly.",
"That is seen as an important step towards achieving general AI.",
"The scope of emergent communication is so far, however, still limited.",
"It is necessary to enhance the learning possibilities for skills associated with communication to increase the emergable complexity.",
"We took an example from human language acquisition and the importance of the empathic connection in this process.",
"We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning.",
"We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change improving the learning time.",
"Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup.",
"Natural language is not as rule-based as researchers in supervised language learning would prefer.",
"There are limitless context-dependent notions to it, and flexible language use is considered as a necessary aspect of general AI.",
"Originally, natural language emerged through a necessity to achieve successful coordination.",
"Hence, a general AI would need to understand the functional aspects of language and learn communication through interaction (Wittgenstein, 1958; Wagner et al., 2003) .",
"These considerations led to the research field of emergent communication and the attempt to ground natural language through reinforcement learning.",
"Deep reinforcement learning has achieved some impressive results over the last years (Arulkumaran et al., 2017) .",
"One of its principal aspects is the ability to extract features from high dimensional input data without manual preprocessing.",
"This capability is especially useful if the necessary representation is unknown to the designer.",
"Classical deep reinforcement learning approaches rely on a large number of training examples, mainly because the sparse reward hardly provides enough feedback to shape the deep layers.",
"These deep layers are responsible for the embedding of input data into a meaningful representation.",
"Therefore, it takes many training steps before a useful representation emerges; if it converges at all.",
"According to the theory of the predictive mind (Hohwy, 2013) , the human brain generates richer feedback through learning several unsupervised prediction tasks while training on the main task.",
"The purpose of these predictions is to produce more and more expressive models and representations of the world.",
"Oh et al. (2015) achieved a far more expressive representation of their visual inputs by learning an auxiliary prediction task.",
"The sole purpose of the auxiliary net is to predict the change in the visual input given the last movement action.",
"Training this net does not directly affect the original task, but it refines the visual representation to reflect the concepts of a 3D world.",
"Hermann et al. (2017) used predictive tasks to ground natural language, but only focused on better understanding an existent language.",
"We transfer the auxiliary prediction to the task of active communication.",
"This goes along with the theory of mind (Premack & Woodruff, 1978; Schaafsma et al., 2015) stating that an essential part of intelligence in interaction emerges through predicting the mental state of the interaction partner.",
"We let the speaker train an auxiliary net that tries to predict how the speaker's utterance will change the listener's hidden state.",
"That resembles humans empathetic way of understanding what a message will do to the listener.",
"We assume this leads to a more communication effective representation of the sensory input; in other words, the input encoding becomes more communicatable.",
"The effect is visible in the essential acceleration of learning successes in developing a shared language.",
"Our main contribution is an elegant extension to multi-agent deep reinforcement learning (MADRL) algorithms aiming to emerge a communication.",
"It resembles an empathic connection between speaker and listener, which leads to faster convergence to a shared language.",
"We doubled the learning speed of a MADRL algorithm playing a referential game by introducing this auxiliary prediction task to the speaking agent.",
"We attribute the improvement to the richer gradients in the lower layers of the neural network to embed the input."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.20689654350280762,
0.05882352590560913,
0,
0,
0.0714285671710968,
0.13793103396892548,
0.07692307233810425,
0.1764705777168274,
0.1428571343421936,
0.25,
0.0624999962747097,
0.08695651590824127,
0.05405404791235924,
0.13333332538604736,
0.06896550953388214,
0,
0,
0.05405404791235924,
0,
0,
0.15789473056793213,
0,
0.25,
0.13333332538604736,
0,
0.0624999962747097,
0.27272728085517883,
0.04651162400841713,
0.0624999962747097,
0,
0.060606054961681366,
0.2222222238779068,
0.06666666269302368,
0.06896550953388214,
0.3030303120613098,
0.07407406717538834
] | Hke1gySFvB | true | [
"An auxiliary prediction task can speed up learning in language emergence setups."
] |
[
"Image paragraph captioning is the task of automatically generating multiple sentences for describing images in grain-fined and coherent text.",
"Existing typical deep learning-based models for image captioning consist of an image encoder to extract visual features and a language model decoder, which has shown promising results in single high-level sentence generation.",
"However, only the word-level scalar guiding signal is available when the image encoder is optimized to extract visual features.",
"The inconsistency between the parallel extraction of visual features and sequential text supervision limits its success when the length of the generated text is long (more than 50 words).",
"In this paper, we propose a new module, called the Text Embedding Bank (TEB) module, to address the problem for image paragraph captioning.",
"This module uses the paragraph vector model to learn fixed-length feature representations from a variable-length paragraph.",
"We refer to the fixed-length feature as the TEB.",
"This TEB module plays two roles to benefit paragraph captioning performance.",
"First, it acts as a form of global and coherent deep supervision to regularize visual feature extraction in the image encoder.",
"Second, it acts as a distributed memory to provide features of the whole paragraph to the language model, which alleviating the long-term dependency problem.",
"Adding this module to two existing state-of-the-art methods achieves a new state-of-the-art result by a large margin on the paragraph captioning Visual Genome dataset.",
"Automatically generating a natural language description for visual content like image or video is an emerging interdisciplinary task.",
"This task involves computer vision, natural language processing and artificial intelligence.",
"Thanks to the advent of large datasets Lin et al. (2014) ; Young et al. (2014) ; Krishna et al. (2017b) , many recent works Mao et al. (2014) ; You et al. (2016) have shown promising results in generating a single high-level scene for images and videos.",
"However, the coarse, scene-level descriptions that these models produce cannot meet real-world applications such as video retrieval, automatic medical report generation Greenspan et al. (2016) ; ; Li et al. (2018a) , blind navigation and automatic video subtitling which capture fine-grained entities and have a coherent and logically detailed description.",
"To tackle this challenge, a relatively new task called paragraph captioning is emerging.",
"Paragraph captioning is the task of generating coherent and logically detailed descriptions by capturing the fine-grained entities of the image or video.",
"A few works Krause et al. (2017) ; Liang et al. (2017) ; Melas-Kyriazi et al. (2018) have pushed the performance to new heights with the main paragraph captioning dataset, the Visual Genome corpus, a dataset introduced by Krause et al. (2017) .",
"Compared with the performance of single-sentence caption generating models, the performance paragraph-length caption generating models is lower by a large margin.",
"Paragraph captioning for images and videos is challenging due to the requirement of both fine-grained image understanding and long-term language reasoning.",
"To overcome these challenges, we propose the TEB module, a module that is easy to integrate with existing image captioning models.",
"This module maps variedlength paragraphs to a fixed-length vector which we call TEB.",
"Each unique vector in the TEB has distance meaning and indexed by the order of the word in the vocabulary.",
"The TEB has a distributed memory.",
"This is illustrated in detail in section 3.",
"Existing deep learning based models typically consist of an image encoder to extract visual features in parallel with a RNN language model decoder to generate the sentences word by word sequentially.",
"In the training stage, only a tiny partial scalar guiding information from the word level loss is available to optimize the image encoding training.",
"This results in an insufficient fine-grained and coherent image visual feature extraction.",
"The TEB module, which holds the whole paragraph in a distributed memory model, can provide global supervision to better regularize the image encoder in the training stage.",
"The RNNs are known to have a long-term dependency problem because of vanishing and exploding gradients which make it unable to meet long-term language reasoning.",
"Since the TEB module has distributed memory and can provide ordering, it is better with long-term language reasoning.",
"We integrated our TEB module with the state-of-the-art methods on the only available paragraph captioning dataset, the Visual Genome corpus, and achieved new state-of-the-art by a large margin.",
"In this paper, we propose the Text Embedding Bank (TEB) module for visual paragraph captioning, a task which requires capturing fine-grained entities in the image to generate a detailed and coherent paragraph, like a story.",
"Our TEB module provides global and parallel deep supervision and distributed memory for find-grained image understanding and long-term language reasoning.",
"Integrating the TEB module to existing state-of-the-art methods achieves new state-of-the-art results by a large margin."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.05714285373687744,
0,
0,
0.07999999821186066,
0,
0.1666666567325592,
0.13333332538604736,
0,
0,
0,
0.09090908616781235,
0,
0.04999999701976776,
0,
0,
0,
0,
0,
0.0833333283662796,
0.07999999821186066,
0.11764705181121826,
0.09999999403953552,
0.19999998807907104,
0,
0,
0,
0,
0.0714285671710968,
0,
0.09090908616781235,
0.06896551698446274,
0.0555555522441864,
0.1818181723356247,
0.10526315122842789
] | Sygt9yBtPS | true | [
"TEB Module for IPC"
] |
[
"Learning Mahalanobis metric spaces is an important problem that has found numerous applications.",
"Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. ",
"We consider a formulation of Mahalanobis metric learning as an optimization problem,where the objective is to minimize the number of violated similarity/dissimilarity constraints. ",
"We show that for any fixed ambient dimension, there exists a fully polynomial time approximation scheme (FPTAS) with nearly-linear running time.",
"This result is obtained using tools from the theory of linear programming in low dimensions.",
"We also discuss improvements of the algorithm in practice, and present experimental results on synthetic and real-world data sets.",
"Our algorithm is fully parallelizable and performs favorably in the presence of adversarial noise.",
"Learning metric spaces is a fundamental computational primitive that has found numerous applications and has received significant attention in the literature.",
"We refer the reader to Kulis et al. (2013) ; Li and Tian (2018) for detailed exposition and discussion of previous work.",
"At the high level, the input to a metric learning problem consists of some universe of objects X, together with some similarity information on subsets of these objects.",
"Here, we focus on pairwise similarity and dissimilarity constraints.",
"Specifically, we are given S, D Ă`X 2˘, which are sets of pairs of objects that are labeled as similar and dissimilar respectively.",
"We are also given some u, ą 0, and we seek to find a mapping f : X Ñ Y , into some target metric space pY, ρq, such that for all x, y P S, ρpf pxq, f pyqq ď u, and for all x, y P D, ρpf pxq, f pyqq ě .",
"In the case of Mahalanobis metric learning, we have X Ă R d , with |X| \" n, for some d P N, and the mapping f : R d Ñ R d is linear.",
"Specifically, we seek to find a matrix G P R dˆd , such that for all tp, qu P S, we have",
"and for all tp, qu P D, we have",
"1.1 OUR CONTRIBUTION",
"In general, there might not exist any G that satisfies all constraints of type 1 and 2.",
"We are thus interested in finding a solution that minimizes the fraction of violated constraints, which corresponds to maximizing the accuracy of the mapping.",
"We develop a p1`εq-approximation algorithm for optimization problem of computing a Mahalanobis metric space of maximum accuracy, that runs in near-linear time for any fixed ambient dimension d P N. This algorithm is obtained using tools from geometric approximation algorithms and the theory of linear programming in small dimension.",
"The following summarizes our result.",
"Theorem 1.1.",
"For any d P N, ε ą 0, there exists a randomized algorithm for learning d-dimensional Mahalanobis metric spaces, which given an instance that admits a mapping with accuracy r˚, computes a mapping with accuracy at least r˚´ε, in time d Op1q nplog n{εq Opdq , with high probability.",
"The above algorithm can be extended to handle various forms of regularization.",
"We also propose several modifications of our algorithm that lead to significant performance improvements in practice.",
"The final algorithm is evaluated experimentally on both synthetic and real-world data sets, and is compared against the currently best-known algorithms for the problem."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0833333283662796,
0.04999999701976776,
0.12121211737394333,
0.06451612710952759,
0,
0.13793103396892548,
0.23999999463558197,
0.12903225421905518,
0.0624999962747097,
0.1764705777168274,
0.09999999403953552,
0.06451612710952759,
0.07692307233810425,
0.14999999105930328,
0,
0.09999999403953552,
0,
0.0714285671710968,
0,
0.11320754140615463,
0,
0.15094339847564697,
0.08695651590824127,
0.07407406717538834,
0.1249999925494194
] | SkluFgrFwH | true | [
"Fully parallelizable and adversarial-noise resistant metric learning algorithm with theoretical guarantees."
] |
[
"Standard image captioning tasks such as COCO and Flickr30k are factual, neutral in tone and (to a human) state the obvious (e.g., “a man playing a guitar”).",
"While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions. ",
"With this in mind we define a new task, Personality-Captions, where the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits.",
"We collect and release a large dataset of 201,858 of such captions conditioned over 215 possible traits. ",
"We build models that combine existing work from (i) sentence representations (Mazaré et al., 2018) with Transformers trained on 1.7 billion dialogue examples; and (ii) image representations (Mahajan et al., 2018) with ResNets trained on 3.5 billion social media images. ",
"We obtain state-of-the-art performance on Flickr30k and COCO, and strong performance on our new task.",
"Finally, online evaluations validate that our task and models are engaging to humans, with our best model close to human performance.",
"If we want machines to communicate with humans, they must be able to capture our interest, which means spanning both the ability to understand and the ability to be engaging, in particular to display emotion and personality as well as conversational function BID17 BID18 BID41 BID19 .Communication",
"grounded in images is naturally engaging to humans BID15 , and yet the majority of studies in the machine learning community have so far focused on function only: standard image captioning BID36 requires the machine to generate a sentence which factually describes the elements of the scene in a neutral tone. Similarly, visual",
"question answering BID2 and visual dialogue BID6 require the machine to answer factual questions about the contents of the image, either in single turn or dialogue form. They assess whether",
"the machine can perform basic perception over the image which humans take for granted. Hence, they are useful",
"for developing models that understand content, but are not useful as an end application unless the human cannot see the image, e.g. due to visual impairment BID13 .Standard image captioning",
"tasks simply state the obvious, and are not considered engaging captions by humans. For example, in the COCO",
"BID5 and Flickr30k BID52 tasks, some examples of captions include \"a large bus sitting next to a very tall building\" and \"a butcher cutting an animal to sell\", which describe the contents of those images in a personality-free, factual manner. However, humans consider",
"engaging and effective captions ones that \"avoid stating the obvious\", as shown by advice to human captioners outside of machine learning.1 For example, \"If the bride",
"and groom are smiling at each other, don't write that they are smiling at each other. The photo already visually shows what the subject is doing. Rephrase the caption to reflect the story behind the image\". Moreover, it is considered",
"that \"conversational language works best. Write the caption as though you are talking to a family member or friend\".2 These instructions for human",
"captioners to engage human readers seem to be in direct opposition to standard captioning datasets.In this work we focus on image captioning that is engaging for humans by incorporating personality. As no large dataset exists that",
"covers the range of human personalities, we build and release a new dataset, PERSONALITY-CAPTIONS, with 201,858 captions, each conditioned on one of 215 Standard captioning output: A plate with a sandwich and salad on it. Our model with different personality",
"traits: Sweet That is a lovely sandwich.",
"In this work we consider models that can simultaneously understand image content and provide engaging captions for humans.",
"To build strong models, we first leverage the latest advances in image and sentence encoding to create generative and retrieval models that perform well on standard image captioning tasks.",
"In particular, we attain a new state-of-the-art on caption generation on COCO, and introduce a new retrieval architecture, TransResNet, that yields the highest known hits@1 score on the Flickr30k dataset.To make the models more engaging to humans, we then condition them on a set of controllable personality traits.",
"To that end, we collect a large dataset, PERSONALITY-CAPTIONS to train such models.",
"Using automatic metrics and human evaluations, we show that our best system is able to produce captions that are close to matching human performance in terms of engagement.",
"Our benchmark will be made publicly available to encourage further model development, leaving the possibility of superhuman performance coming soon in this domain.A IMPACT OF PRETRAINED WORD EMBEDDINGS AND TEXT ENCODERS Table 7 : More detailed results for retrieval model performance on COCO Captions using the splits of BID20 .",
"For our TransResNet models, we compare two types of pretraining: Full indicates a model with a pretrained text encoder, while Word indicates a model with pretrained word embeddings only."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.2666666507720947,
0.2926829159259796,
0.13333332538604736,
0.17142856121063232,
0.18518517911434174,
0.13333332538604736,
0.21621620655059814,
0.0714285671710968,
0.19672130048274994,
0.08888888359069824,
0.17142856121063232,
0.25,
0.2857142686843872,
0.0714285671710968,
0.1818181723356247,
0.11999999731779099,
0.1428571343421936,
0.23529411852359772,
0.23076923191547394,
0,
0.2222222238779068,
0.31111109256744385,
0.24137930572032928,
0.12903225421905518,
0.1395348757505417,
0.0937499925494194,
0.04878048226237297
] | HJN6DiAcKQ | true | [
"We develop engaging image captioning models conditioned on personality that are also state of the art on regular captioning tasks."
] |
[
"Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than the non-private ones.",
"To mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees.",
"At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism.",
"Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but a better utility especially for the scenarios with strong DP guarantees.",
"In practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better.",
"The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible.",
"DP-LSSGD is applicable to train a large variety of ML models, including DNNs.",
"Many released machine learning (ML) models are trained on sensitive data that are often crowdsourced or contain private information (Yuen et al., 2011; Feng et al., 2017; Liu et al., 2017) .",
"With overparameterization, deep neural nets (DNNs) can memorize the private training data, and it is possible to recover them and break the privacy by attacking the released models (Shokri et al., 2017) .",
"For example, Fredrikson et al. demonstrated that a model-inversion attack can recover training images from a facial recognition system (Fredrikson et al., 2015) .",
"Protecting the private data is one of the most critical tasks in ML.",
"Differential privacy (DP) (Dwork et al., 2006 ) is a theoretically rigorous tool for designing algorithms on aggregated databases with a privacy guarantee.",
"The idea is to add a certain amount of noise to randomize the output of a given algorithm such that the attackers cannot distinguish outputs of any two adjacent input datasets that differ in only one entry.",
"For repeated applications of additive noise based mechanisms, many tools have been invented to analyze the DP guarantee for the model obtained at the final stage.",
"These include the basic and strong composition theorems and their refinements (Dwork et al., 2006; 2010; Kairouz et al., 2015) , the moments accountant (Abadi et al., 2016) , etc.",
"Beyond the original notion of DP, there are also many other ways to define the privacy, e.g., local DP (Duchi et al., 2014) , concentrated/zeroconcentrated DP (Dwork & Rothblum, 2016; Bun & Steinke, 2016) , and Rényi-DP (RDP) (Mironov, 2017) .",
"Differentially private stochastic gradient descent (DP-SGD) reduces the utility of the trained models severely compared with SGD.",
"As shown in Figure 1 , the training and validation losses of the logistic regression on the MNIST dataset increase rapidly when the DP guarantee becomes stronger.",
"The convolutional neural net (CNN) 1 trained by DP-SGD has much lower testing accuracy than the non-private one on the MNIST.",
"We will discuss the detailed experimental settings in Section 4.",
"A natural question raised from such performance degradations is:",
"Can we improve DP-SGD, with negligible extra computational complexity and memory cost, such that it can be used to train general ML models with improved utility?",
"We answer the above question affirmatively by proposing differentially private Laplacian smoothing SGD (DP-LSSGD) to improve the utility in privacy-preserving empirical risk minimization (ERM).",
"DP-LSSGD leverages the Laplacian smoothing (Osher et al., 2018) as a post-processing to smooth the injected Gaussian noise in the differentially private SGD (DP-SGD) to improve the convergence of DP-SGD in training ML models with DP guarantee.",
"In this paper, we integrated Laplacian smoothing with DP-SGD for privacy-presrving ERM.",
"The resulting algorithm is simple to implement and the extra computational cost compared with the DP-SGD is almost negligible.",
"We show that DP-LSSGD can improve the utility of the trained private ML models both numerically and theoretically.",
"It is straightforward to combine LS with other variance reduction technique, e.g., SVRG (Johoson & Zhang, 2013) ."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3720930218696594,
0.5,
0.05128204822540283,
0.21276594698429108,
0.1904761791229248,
0.1428571343421936,
0.1666666567325592,
0.1599999964237213,
0.18867923319339752,
0.045454539358615875,
0.05714285373687744,
0.13333332538604736,
0.07407406717538834,
0.04255318641662598,
0.04347825422883034,
0.06666666269302368,
0.3589743673801422,
0.04255318641662598,
0,
0.060606054961681366,
0,
0.2083333283662796,
0.30434781312942505,
0.2857142686843872,
0.17142856121063232,
0.14999999105930328,
0.25,
0.0952380895614624
] | BJlG5a4FvB | true | [
"We propose a differentially private Laplacian smoothing stochastic gradient descent to train machine learning models with better utility and maintain differential privacy guarantees."
] |
[
"We study the robust one-bit compressed sensing problem whose goal is to design an algorithm that faithfully recovers any sparse target vector\n",
"$\\theta_0\\in\\mathbb{R}^d$ \\emph{uniformly} from $m$ quantized noisy measurements.",
"Under the assumption that the measurements are sub-Gaussian, to recover any $k$-sparse $\\theta_0$ ($k\\ll d$) \\emph{uniformly} up to an error $\\varepsilon$ with high probability, the best known computationally tractable algorithm requires\\footnote{Here, an algorithm is ``computationally tractable'' if it has provable convergence guarantees.",
"The notation $\\tilde{\\mathcal{O}}(\\cdot)$ omits a logarithm factor of $\\varepsilon^{-1}$.",
"} $m\\geq\\tilde{\\mathcal{O}}(k\\log d/\\varepsilon^4)$.",
"In this paper, we consider a new framework for the one-bit sensing problem where the sparsity is implicitly enforced via mapping a low dimensional representation $x_0$ through a known $n$-layer ReLU generative network $G:\\mathbb{R}^k\\rightarrow\\mathbb{R}^d$.",
"Such a framework poses low-dimensional priors on $\\theta_0$ without a known basis.",
"We propose to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker \\emph{sub-exponential measurement assumption}.",
" For such a problem, we establish a joint statistical and computational analysis",
". In particular, we prove that the ERM estimator in this new framework achieves an improved statistical rate of $m=\\tilde{\\mathcal{O}} (kn\\log d /\\epsilon^2)$ recovering any $G(x_0)$ uniformly up to an error $\\varepsilon$.",
"Moreover, from the lens of computation, we prove that under proper conditions on the ReLU weights, our proposed empirical risk, despite non-convexity, has no stationary point outside of small neighborhoods around the true representation $x_0$ and its negative multiple.",
"Furthermore, we show that the global minimizer of the empirical risk stays within the neighborhood around $x_0$ rather than its negative multiple.",
"Our analysis sheds some light on the possibility of inverting a deep generative model under partial and quantized measurements, complementing the recent success of using deep generative models for inverse problems.",
"Quantized compressed sensing investigates how to design the sensing procedure, quantizer and reconstruction algorithm so as to recover a high dimensional vector from a limited number of quantized measurements.",
"The problem of one-bit compressed sensing, which aims at recovering a target vector θ 0 ∈ R d from single-bit observations y i = sign( a i , θ 0 ), i ∈ {1, 2, · · · , m}, m d and random sensing vectors a i ∈ R d , is particularly challenging.",
"Previous theoretical successes on this problem (e.g. Jacques et al. (2013) ; Plan and Vershynin (2013) ) mainly rely on two key assumptions: (1) The Gaussianity of the sensing vector a i , (2) The sparsity of the vector θ 0 on a given basis.",
"However, the practical significance of these assumptions are rather limited in the sense that it is difficult to generate Gaussian vectors and high dimensional targets in practice are often distributed * Equal Contribution 1 Here, an algorithm is \"computationally tractable\" if it has provable convergence guarantees.",
"The notatioñ O(·) omits a logarithm factor of ε −1 .",
"near a low-dimensional manifold rather than sparse on some given basis.",
"The goal of this work is to make steps towards addressing these two limitations.",
"Specifically, we introduce a new framework for robust dithered one-bit compressed sensing where the structure of target vector θ 0 is represented via a ReLU network G :",
"Building upon this framework, we propose a new recovery algorithm by solving an unconstrained ERM.",
"We show this algorithm enjoys the following favorable properties:",
"• Statistically, when taking measurements a i to be sub-exponential random vectors, with high probability and uniformly for any",
"is the ball of radius R > 0 centered at the origin, the solution G( x m ) to the ERM recovers the true vector G(x 0 ) up to error ε when the number of samples m ≥ O(kn log 4 (ε −1 )(log d + log(ε −1 ))/ε 2 ).",
"In particular, our result does not require REC type assumptions adopted in previous analysis of generative signal recovery works and at the same time weakens the known sub-Gaussian assumption adopted in previous one-bit compressed sensing works.",
"When the number of layers n is small, this result meets the minimax optimal rate (up to a logarithm factor) for sparse recovery and simultaneously improves upon the best knownÕ(k log d/ε 4 ) statistical rate for computationally tractable algorithms.",
"• Computationally, we show that solving the ERM and approximate the true representation x 0 ∈ R k is tractable.",
"More specifically, we prove with high probability, there always exists a descent direction outside two small neighborhoods around x 0 and its negative multiple with radius O(ε 1/4 ), uniformly for any x 0 ∈ B k 2 (R ) with R = (0.5+ε) −n/2 R, when the ReLU network satisfies a weight distribution condition with parameter ε > 0 and m ≥ O(kn log 4 (ε −1 )(log d + log(ε −1 ))/ε 2 ).",
"Furthermore, when ε is small enough, one guarantees that the solution x m stays within the neighborhood around x 0 (rather than its negative multiple).",
"Our result is achieved without assuming the REC type conditions and under quantization errors, thereby improving upon previously known computational guarantees for ReLU generative signal recovery in linear models with small noise.",
"From a technical perspective, our proof makes use of the special piecewise linearity property of ReLU network.",
"The merits of such a property in the current scenario are two folds: (1) It allows us to replaces the generic chaining type bounds commonly adopted in previous works (e.g. Dirksen and Mendelson (2018a) ) by novel arguments that are \"sub-Gaussian free\".",
"(2) From a hyperplane tessellation point of view, we show that for a given accuracy level, a binary embedding of",
"2 (R) into Euclidean space is \"easier\" in that it requires less random hyperplanes than that of a bounded k sparse set.",
"Notations.",
"Throughout the paper, let S d−1 and B(x, r) denotes the unit sphere and the ball of radius r centered at",
"We say a random variable is sub-exponential if its ψ 1 -norm is bounded.",
"A random vector x ∈ R d is sub-exponential if there exists a a constant C > 0 such that sup t∈S d−1 x, t ψ1 ≤ C. We use x ψ1 to denote the minimal C such that this bound holds.",
"Furthermore, C, C , c, c 1 , c 2 , c 3 , c 4 , c 5 denote absolute constants, their actual values can be different per appearance."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2631579041481018,
0,
0.037735845893621445,
0.1599999964237213,
0,
0.21276594698429108,
0.07407406717538834,
0.15789473056793213,
0.37037035822868347,
0.08695651590824127,
0.07692307233810425,
0.0555555522441864,
0.23255813121795654,
0.2380952388048172,
0.2545454502105713,
0.18518517911434174,
0.07017543166875839,
0.1538461446762085,
0.07407406717538834,
0.06666666269302368,
0.23255813121795654,
0.06451612710952759,
0.07999999821186066,
0.17142856121063232,
0.0357142798602581,
0.2978723347187042,
0.1538461446762085,
0.05714285373687744,
0.0731707289814949,
0,
0.1666666567325592,
0.1249999925494194,
0.1071428507566452,
0.12121211737394333,
0.10810810327529907,
0.11764705181121826,
0.13793103396892548,
0.07843136787414551,
0
] | rkxbn735IH | true | [
"We provide statistical and computational analysis of one-bit compressed sensing problem with a generative prior. "
] |
[
"We introduce an unsupervised structure learning algorithm for deep, feed-forward, neural networks.",
"We propose a new interpretation for depth and inter-layer connectivity where a hierarchy of independencies in the input distribution is encoded in the network structure.",
"This results in structures allowing neurons to connect to neurons in any deeper layer skipping intermediate layers.",
"Moreover, neurons in deeper layers encode low-order (small condition sets) independencies and have a wide scope of the input, whereas neurons in the first layers encode higher-order (larger condition sets) independencies and have a narrower scope.",
"Thus, the depth of the network is automatically determined---equal to the maximal order of independence in the input distribution, which is the recursion-depth of the algorithm.",
"The proposed algorithm constructs two main graphical models:",
"1) a generative latent graph (a deep belief network) learned from data and",
"2) a deep discriminative graph constructed from the generative latent graph.",
"We prove that conditional dependencies between the nodes in the learned generative latent graph are preserved in the class-conditional discriminative graph.",
"Finally, a deep neural network structure is constructed based on the discriminative graph.",
"We demonstrate on image classification benchmarks that the algorithm replaces the deepest layers (convolutional and dense layers) of common convolutional networks, achieving high classification accuracy, while constructing significantly smaller structures.",
"The proposed structure learning algorithm requires a small computational cost and runs efficiently on a standard desktop CPU.",
"Over the last decade, deep neural networks have proven their effectiveness in solving many challenging problems in various domains such as speech recognition BID17 , computer vision BID28 BID16 BID46 and machine translation BID9 .",
"As compute resources became more available, large scale models having millions of parameters could be trained on massive volumes of data, to achieve state-of-the-art solutions for these high dimensionality problems.",
"Building these models requires various design choices such as network topology, cost function, optimization technique, and the configuration of related hyper-parameters.In this paper, we focus on the design of network topology-structure learning.",
"Generally, exploration of this design space is a time consuming iterative process that requires close supervision by a human expert.",
"Many studies provide guidelines for design choices such as network depth BID46 , layer width BID55 , building blocks , and connectivity BID20 BID23 .",
"Based on these guidelines, these studies propose several meta-architectures, trained on huge volumes of data.",
"These were applied to other tasks by leveraging the representational power of their convolutional layers and fine-tuning their deepest layers for the task at hand BID21 BID33 .",
"However, these meta-architecture may be unnecessarily large and require large computational power and memory for training and inference.",
"The problem of model structure learning has been widely researched for many years in the probabilistic graphical models domain.",
"Specifically, Bayesian networks for density estimation and causal discovery BID42 BID50 .",
"Two main approaches were studied: score-based (search-and-score) and constraint-based.",
"Score-based approaches combine a scoring function, such as BDe BID10 and BIC BID44 , with a strategy for searching through the space of structures, such as greedy equivalence search BID6 .",
"BID1 introduced an algorithm for sampling deep belief networks (generative model) and demonstrated its applicability to high-dimensional image datasets.Constraint-based approaches BID42 BID50 find the optimal structures in the large sample limit by testing conditional independence (CI) between pairs of variables.",
"They are generally faster than score-based approaches BID54 ) and have a well-defined stopping criterion (e.g., maximal order of conditional independence).",
"However, these methods are sensitive to errors in the independence tests, especially in the case of high-order conditional-independence tests and small training sets.Motivated by these methods, we propose a new interpretation for depth and inter-layer connectivity in deep neural networks.",
"We derive a structure learning algorithm such that a hierarchy of independencies in the input distribution is encoded in the network structure, where the first layers encode higher-order independencies than deeper layers.",
"Thus, the number of layers is automatically determined.",
"Moreover, a neuron in a layer is allowed to connect to neurons in deeper layers skipping intermediate layers.",
"An example of a learned structure, for MNIST, is given in Figure 1 .We",
"describe our recursive algorithm in two steps. In",
"Section 2 we describe a base case-a singlelayer structure learning. In",
"Section 3 we describe multi-layer structure learning by applying the key concepts of the base case, recursively (proofs are provided in Appendix A). In",
"Section 4 we discuss related work. We",
"provide experimental results in Section 5, and conclude in Section 6. DISPLAYFORM0",
"a set of latent variables, and Y a class variable. Our algorithm",
"constructs three graphical models and an auxiliary graph. Each variable",
"is represented by a single node and a single edge may connect two distinct nodes. Graph G is a",
"generative DAG defined over the observed and latent variables X ∪ H. Graph G Inv is called a stochastic inverse of G. Graph G D is a discriminative model defined over the observed, latent, and class variables X ∪ H ∪ Y . An auxiliary",
"graph G X is defined over X (a CPDAG; an equivalence class of a Bayesian network) and is generated and maintained as an internal state of the algorithm. The parents",
"set of a node X in G is denoted P a(X; G). The order of",
"an independence relation is defined to be the condition set size. For example,",
"if X 1 and X 2 are independent given X 3 and X 4 , denoted X 1 ⊥ ⊥ X 2 |{X 3 , X 4 }, then the independence order is two. Figure 1 : An",
"example of a structure learned by our algorithm (classifying MNIST digits). Neurons in a",
"layer may connect to neurons in any deeper layer. Depth is determined",
"automatically. Each gather layer selects",
"a subset of the input, where each input variable is gathered only once. A neural route, starting",
"with a gather layer, passes through densely connected layers where it may split (copy) and merge (concatenate) with other routes in correspondence with the hierarchy of independencies identified by the algorithm. All routes merge into the",
"final output layer (e.g., a softmax layer).",
"We presented a principled approach for learning the structure of deep neural networks.",
"Our proposed algorithm learns in an unsupervised manner and requires small computational cost.",
"The resulting structures encode a hierarchy of independencies in the input distribution, where a node in one layer may connect another node in any deeper layer, and depth is determined automatically.We demonstrated that our algorithm learns small structures, and maintains high classification accuracies for common image classification benchmarks.",
"It is also demonstrated that while convolution layers are very useful at exploiting domain knowledge, such as spatial smoothness, translational invariance, and symmetry, they are mostly outperformed by a learned structure for the deeper layers.",
"Moreover, while the use of common topologies (meta-architectures), for a variety of classification tasks is computationally inefficient, we would expect our approach to learn smaller and more accurate networks for each classification task, uniquely.As only unlabeled data is required for learning the structure, we expect our approach to be practical for many domains, beyond image classification, such as knowledge discovery, and plan to explore the interpretability of the learned structures."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.32258063554763794,
0.4878048598766327,
0,
0.1395348757505417,
0.10810810327529907,
0,
0.1875,
0.13793103396892548,
0,
0.25,
0.08510638028383255,
0.2222222238779068,
0.1538461446762085,
0.0833333283662796,
0.1249999925494194,
0.10526315122842789,
0.19512194395065308,
0.0624999962747097,
0.1395348757505417,
0.11764705181121826,
0.21052631735801697,
0.19999998807907104,
0.0714285671710968,
0.21739129722118378,
0.16949151456356049,
0.1428571343421936,
0.4363636374473572,
0.17777776718139648,
0.07407406717538834,
0.060606054961681366,
0.1818181723356247,
0,
0.19999998807907104,
0.1428571343421936,
0,
0.06896550953388214,
0.19999998807907104,
0.06896550953388214,
0.11764705181121826,
0.12244897335767746,
0.13636362552642822,
0.12121211737394333,
0,
0.04651162400841713,
0.1875,
0,
0,
0.2222222238779068,
0.16326530277729034,
0.0714285671710968,
0.625,
0.0624999962747097,
0.16129031777381897,
0.1538461446762085,
0.19178082048892975
] | ryjw_eAaZ | true | [
"A principled approach for structure learning of deep neural networks with a new interpretation for depth and inter-layer connectivity. "
] |
[
"L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.",
"However, imposing strong L1 or L2 regularization with gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks.",
"To understand this phenomenon, we investigate how and why training fails for strong regularization.",
"Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis why the gradients diminish so fast.",
"We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it.",
"We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level.",
"Experiment results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets.",
"Our source code is published.",
"Regularization has been very common for machine learning to prevent over-fitting and to obtain sparse solutions.",
"Deep neural networks (DNNs), which have shown huge success in many tasks such as computer vision BID9 BID15 BID5 and speech recognition , often contain a number of parameters in multiple layers with non-linear activation functions, in order to gain enough expressive power.",
"However, DNNs with many parameters are often prone to over-fitting, so the need for regularization has been emphasized.",
"While new regularization techniques such as dropout BID16 and pruning BID2 have been proposed to solve the problem, the traditional regularization techniques using L1 or L2 norms have cooperated with them to further improve the performance significantly.",
"L1 regularization, often called Lasso BID17 , obtains sparse solutions so that the required memory and power consumption are reduced while keeping reasonable accuracy.",
"On the other hand, L2 regularization smooths the parameter distribution and reduces the magnitude of parameters, so the resulting solution is simple (i.e., less prone to over-fitting) and effective.",
"Indeed, our empirical results show that applying strong L2 regularization to the deep neural networks that already has dropout layers can reduce the error rate by up to 24% on a public data set.Strong regularization is especially desired when the model contains too many parameters for the given amount of training data.",
"This is often the case for deep learning tasks in practice because DNNs often contain millions of parameters while labeled training data set is limited and expensive.",
"However, imposing strong L1 or L2 regularization on DNNs is difficult for gradient descent method due to the vanishing gradient problem.",
"If we impose too strong regularization, the gradient from regularization becomes dominant, and DNNs stop learning.",
"In this paper, we first study the interesting phenomenon that strong regularization fails in learning.",
"We also provide an analysis why the gradients diminish so quickly that learning completely fails.",
"Then, we propose a simple yet effective solution, Delayed Strong Regularization, which carries a time-dependent schedule of regularization strength.",
"We find that we can overcome the failure in learning by waiting for the model to reach an \"active learning\" phase, where the gradients' magnitudes are significant, and then enforcing strong regularization.",
"Delayed Strong Regularization enables us to obtain the superior performance that is otherwise hidden by learning failure in deep networks.",
"The proposed approach is general and does not require any additional computation.",
"The experiment results indicate that the proposed approach indeed achieves strong regularization, consistently yielding even higher accuracy and higher compression rate that could not be achieved.",
"In this work, we studied the problem of achieving strong regularization for deep neural networks.",
"Strong regularization with gradient descent algorithm easily fails for deep neural networks, but few work addressed this phenomenon in detail.",
"We provided investigation and analysis of the phenomenon, and we found that there is a strict tolerance level of regularization strength.",
"To avoid this problem, we proposed a novel but simple method: Delayed Strong Regularization.",
"We performed experiments with fine tuning of regularization strength.",
"Evaluation results show that (1) our model successfully achieves strong regularization on deep neural networks, verifying our hypothesis that the model will keep learning once it reaches an \"active learning\" phase, (2) with strong regularization, our model obtains higher accuracy and sparsity, (3) the number of hidden layers in neural networks affects the tolerance level, and (4) L1/L2 regularization is difficult to tune, but it can yield great performance boost when tuned well.There are limitations in this work.",
"Our proposed method can be especially useful when strong regularization is desired.",
"For example, deep learning projects that cannot afford a huge labeled data set can benefit from our method.",
"However, strong regularization may not be necessary in some other cases where the large labeled data set is available or the networks do not contain many parameters.",
"In addition, our experiments were not performed on a bigger data set such as ImageNet data set.",
"We need to fine-tune the models with different regularization parameters, and we also need multiple training sessions of each model to obtain confidence interval.",
"For example, the experiment results in FIG1 and 4 include 750 training sessions in total.",
"This is something we cannot afford with ImageNet data set, which requires several weeks of training for EACH session (unless we have GPU clusters).",
"Our approach cannot be applied to architectures containing normalization techniques for the reason in Section 2.2.",
"We actually tried to intentionally exclude normalization part from Residual Networks BID5 ) and train the model to see if we can apply our method to non-normalized Residual Networks.",
"However, we could not control the exploding gradients caused by the exclusion of normalization.Our work can be further extended in several ways.",
"Since our model can achieve strong regularization, it will be interesting to see how the strongly regularized model performs if combined with pruning-related methods BID2 .",
"We applied our approach to only L1 and L2 regularizers, but applying it to other regularizers such as group sparsity regularizers will be promising as they are often employed for DNNs to compress networks.",
"Lastly, our proposed Delayed Strong Regularization is very simple, so one can easily extend it to more complicated methods.",
"All these directions are left as our future work."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06451612710952759,
0.21052631735801697,
0.48275861144065857,
0.2222222238779068,
0.2222222238779068,
0.1875,
0.1538461446762085,
0,
0.06666666269302368,
0.0714285671710968,
0.060606054961681366,
0.08695651590824127,
0.05128204822540283,
0.0952380895614624,
0.13114753365516663,
0.04999999701976776,
0.17142856121063232,
0.19354838132858276,
0.19999998807907104,
0.19999998807907104,
0.1818181723356247,
0.2222222238779068,
0,
0.07407406717538834,
0.10256409645080566,
0.13333332538604736,
0.11428570747375488,
0.23529411852359772,
0.06896550953388214,
0.1666666567325592,
0.12345678359270096,
0.29629629850387573,
0.1818181723356247,
0.09999999403953552,
0.06666666269302368,
0.1621621549129486,
0.06896550953388214,
0,
0,
0.19999998807907104,
0.05405404791235924,
0.20512820780277252,
0.08888888359069824,
0.05882352590560913,
0
] | Bys_NzbC- | true | [
"We investigate how and why strong L1/L2 regularization fails and propose a method than can achieve strong regularization."
] |
[
"Despite neural network’s high performance, the lack of interpretability has been the main bottleneck for its safe usage in practice.",
"In domains with high stakes (e.g., medical diagnosis), gaining insights into the network is critical for gaining trust and being adopted.",
"One of the ways to improve interpretability of a NN is to explain the importance of a particular concept (e.g., gender) in prediction.",
"This is useful for explaining reasoning behind the networks’ predictions, and for revealing any biases the network may have.",
"This work aims to provide quantitative answers to \\textit{the relative importance of concepts of interest} via concept activation vectors (CAV).",
"In particular, this framework enables non-machine learning experts to express concepts of interests and test hypotheses using examples (e.g., a set of pictures that illustrate the concept).",
"We show that CAV can be learned given a relatively small set of examples.",
"Testing with CAV, for example, can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than other set of concepts.",
"Interpreting with CAV does not require any retraining or modification of the network.",
"We show that many levels of meaningful concepts are learned (e.g., color, texture, objects, a person’s occupation), and we present CAV’s \\textit{empirical deepdream} — where we maximize an activation using a set of example pictures.",
"We show how various insights can be gained from the relative importance testing with CAV.",
"Neural networks (NNs) are capable of impressively good performance, yet understanding and interpreting their behavior remains a significant challenge.",
"Solving this challenge is an important problem for several reasons.",
"For example, explaining a system's behavior may be necessary to establish acceptability and see adoption for critical applications, such as those in the medical domain.",
"For scientists and engineers, any greater understanding of how neural networks function is appreciated, since it may lead to better models and help with debugging (30; 19) .Recent",
"work suggests that linear combinations of neurons may encode meaningful, insightful information (2; 19; 27) . However",
", we lack methods to 1) identify which linear combinations (if any) relate to a given concept, and 2) how these can aid in our quantitative understanding of concepts and classification decisions. For example",
", we may hypothesize that an image model that successfully classifies zebras may naturally encode concepts for 'stripe' and 'animal', somewhere in its internal representations, using a linear combination of neurons. How can we",
"formalize this notion, and test such a hypothesis?Neural networks",
"build internal representations that are far richer than the input features or output classes explicit in their training data. Unfortunately,",
"many machine learning interpretation methods provide results only in terms of input features. For example, the",
"learned coefficients in linear classifiers or logistic regression can be interpreted as each feature's classification importance. Similar first-order",
"importance measures for neural networks often use first derivatives as a proxy for input feature importance, as is done for pixel importance in saliency maps (8; 22) .It is critical that",
"model understanding and interpretation not be limited to only the concepts explicit in training data. This can be seen by",
"considering classification fairness-an increasingly relevant, difficult problem where interpretability can be useful-and noting that no input features may identify discriminated-against groups. For example, the Inception",
"model BID24 has an output class for 'doctor' but no input features identifying the concepts of 'man' or 'woman' in a way that would allow existing interpretability approaches to quantify gender bias in classification.This work introduces the method of concept activation vectors (CAV) for the following purposes. First, CAV can be used to",
"identify linear combinations of neurons in a layer of a model that correspond to given semantic concepts, even for new, user-provided concepts not explicit in the model's training data. Second, CAV provides quantitative",
"measures of the relative importance of userprovided concepts, which allows for hypothesis testing of the relationship between given concepts and the model's predictions.Testing with CAV (TCAV) is designed with the following desiderata in mind.1. accessibility: Requires little to",
"no user expertise in machine learning. 2. customization: Adapt to any concept",
"of interest (e.g., gender) on the fly without pre-listing a set of concepts before training. 3. plug-in readiness: Work without retraining",
"or modifying the model. BID2 . quantification: Provide quantitative explanation",
"that are",
"tied to human-relatable concept, and not input features.One of key ideas for TCAV is that we can test the relative importance between small set of concepts, rather than ranking the importance of all possible features/concepts. For example, we can gain insights by testing whether the",
"concept of gender was used more than the 'wearing scrubs' concept for the classification of doctor. We can also test whether or not a given concept was relevant",
"to the classification of a certain class. Similar forms of sparsity (i.e., only considering a few concepts",
"at a time) are used in many existing interpretable models (12; 7; 28; 31; 29; 4) . Note that interpretability does not mean understanding the entire",
"network's behavior on every feature/concept of the input BID4 . Such a goal may not be achievable, particularly for ML models with",
"super-human performance BID21 .TCAV satisfies these desiderata-accessibility, customization, plug-in",
"readiness and quantification -it enables quantitative relative importance testing for non-ML experts, for user-provided concepts without retraining or modifying the network. Users express their concepts of interest using examples-a set of data",
"points exemplifying the concept. For example, if gender is the concept of interest, users can collect",
"pictures of women. The use of examples has been shown to be powerful medium for communication",
"between machine learning (ML) models and non-expert users (16; 12; 13) . Cognitive studies on experts also support this approach (e.g., experts think",
"in terms of examples BID13 ).The structure of this paper is as follows: Section 2 relates this work to existing",
"interpretability methods. Section 3 explains the details of the TCAV method. In Section 4, we show 1) how this",
"framework can be used to identify semantically meaningful",
"directions in a layer and 2) the relative importance testing results that measure the relevance of concepts of interest to the classification output by the network.",
"We have introduced the notion of a \"concept activation vector,\" or CAV, which is a flexible way to probe the internal representation of a concept in a classification network.",
"Since CAVs may be defined via a set of example inputs, rather than custom coding, they are well suited to use by non-experts.",
"We then described a technique (Testing with CAVs, or TCAV) for quantifying the relation between a CAV and a particular class.",
"The TCAV technique allows us to provide quantitative answers to questions such as, \"How important are the stripes to the classification of a zebra?\"To provide evidence for the value of the TCAV technique, we described a series of experiments which supported common-sense intuition, for example, that stripes are indeed important to the identification of zebras.",
"In addition, we used the DeepDream technique to create images whose internal representations approximate certain CAVs.",
"The resulting pictures were strongly evocative of the original concepts.",
"Finally, we described how the TCAV technique may be used to find associations between concepts, both obvious (\"yellow\" and \"taxi\") and non-obvious (\"red\" and \"cucumber\").In",
"addition to analyzing a single network, TCAV can be also used to compare and contrast a pair of networks. For",
"example, one can compare the relative importance of concepts to determine how the different choices of training process or architecture influences learning of each concept. Based",
"on the results, users can perform model selection based on the concepts that are more or less important for the task.An interesting direction for future work may be to explore applications of using CAVs to adjust the results of a network during inference time. Adding",
"a scalar multiple of a CAV to the activations of an intermediate layer can, as shown in our experiments, allow us to deemphasize or enhance conceptual aspects of an input. One potential",
"application, for example, might be to reduce bias the network has learned from training data."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06185566633939743,
0.17999999225139618,
0.26530611515045166,
0.12631578743457794,
0.3333333432674408,
0.5094339847564697,
0.30434781312942505,
0.4528301954269409,
0.2857142686843872,
0.2678571343421936,
0.2150537669658661,
0.0824742242693901,
0.06818181276321411,
0.11650484800338745,
0.13333332538604736,
0.06382978707551956,
0.1818181723356247,
0.16513760387897491,
0.1149425283074379,
0.10204081237316132,
0.10638297349214554,
0.125,
0.11538460850715637,
0.1855670064687729,
0.0776698961853981,
0.2857142686843872,
0.20370370149612427,
0.2321428507566452,
0.1111111119389534,
0.2222222238779068,
0.06896551698446274,
0.27350425720214844,
0.2574257254600525,
0.1473684161901474,
0.11650484800338745,
0.12121211737394333,
0,
0.29906541109085083,
0.10869564861059189,
0.10752687603235245,
0.13861386477947235,
0.14432989060878754,
0.10526315867900848,
0.09302325546741486,
0.25999999046325684,
0.21568627655506134,
0.13861386477947235,
0.1855670064687729,
0.1551724076271057,
0.06382978707551956,
0.09090908616781235,
0.09803920984268188,
0.1458333283662796,
0.19801980257034302,
0.23931623995304108,
0.13333332538604736,
0.10752687603235245
] | S1viikbCW | true | [
"This work aims to provide quantitative answers to the relative importance of concepts of interest via concept activation vectors (CAV). In particular, this framework enables non-machine learning experts to express concepts of interest and test hypotheses using examples (e.g., a set of pictures that illustrate the concept). We show that CAV can be learned given a relatively small set of examples. Hypothesis testing with CAV can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than other sets of concepts. Interpreting networks with CAV does not require any retraining or modification of the network. "
] |
[
"We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB).",
"These objectives are motivated by the Minimum Necessary Information (MNI) criterion.",
"We demonstrate the application of CEB to classification tasks.",
"We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries.",
"Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al. (2016).",
"The field of Machine Learning has suffered from the following well-known problems in recent years 1 :• Vulnerability to adversarial examples.",
"Essentially all machine-learned systems are currently believed by default to be highly vulnerable to adversarial examples.",
"Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary.",
"Lacking a clear theoretical framework for adversarial attacks, most proposed defenses are ad-hoc and fail in the presence of a concerted attacker BID8 BID5 ).•",
"Poor out-of-distribution detection. Classifiers",
"do a poor job of signaling that they have received data that is substantially different from the data they were trained on. Ideally, a",
"trained classifier would give less confident predictions for data that was far from the training distribution (as well as for adversarial examples). Barring that",
", there would be a clear, principled statistic that could be extracted from the model to tell whether the model should have made a low-confidence prediction. Many different",
"approaches to providing such a statistic have been proposed BID18 BID28 BID19 BID32 BID30 BID13 , but most seem to do poorly on what humans intuitively view as obviously different data.• Miscalibrated",
"predictions. Related to the",
"issues above, classifiers tend to be very overconfident in their predictions BID18 . This may be a",
"symptom, rather than a cause, but miscalibration does not give practitioners confidence in their models.• Overfitting",
"to the training data. BID48 demonstrated",
"that classifiers can memorize fixed random labelings of training data, which means that it is possible to learn a classifier with perfect inability to generalize. This critical observation",
"makes it clear that a fundamental test of generalization is that the model should fail to learn when given what we call information-free datasets.",
"We have presented the basic form of the Conditional Entropy Bottleneck (CEB), motivated by the Minimum Necessary Information (MNI) criterion for optimal representations.",
"We have shown through careful experimentation that simply by switching to CEB, you can expect substantial improvements in OoD detection, adversarial example detection and robustness, calibration, and generalization.",
"Additionally, we have shown that it is possible to get all of these advantages without using any additional form of regularization, and without any new hyperparameters.",
"We have argued empirically that objective hyperparameters can lead to hard-to-predict suboptimal behavior, such as memorizing random labels, or reducing robustness to adversarial examples.",
"In Appendix E and in future work, we will show how to generalize CEB beyond the simple case of two observed variables.It is our perspective that all of the issues explored here -miscalibration, failure at OoD tasks, vulnerability to adversarial examples, and dataset memorization -stem from the same underlying issue, which is retaining too much information about the training data in the learned representation.",
"We believe that the MNI criterion and CEB show a path forward for many tasks in machine learning, permitting fast, amortized inference while ameliorating major problems.",
"a b Figure 4 : Geometry of the optimal surfaces for both CEB (purple) and IB (green) for models that can only come within of the optimal surface (a: = 0.1I(X; Y); b: = 0.01I(X; Y)).",
"The tangent lines have the slope of the corresponding β -the tangent point on the ball corresponds to the point on the pareto-optimal frontier for the corresponding model.",
"Note that β determines the \"exchange rate\" between bits of I(X; Z) and I(Y; Z), which is how we determine the coordinate of the center of the ball.",
"For IB to achieve the MNI point, 2 bits of I(Y; Z) are needed for every bit of I(X; Z).",
"Consequently, even for an infitely powerful model (corresponding to = 0), the only value of β that hits the MNI point is β = 2.",
"Thus, knowing the function (β) for a given model and dataset completely determines the model's pareto-optimal frontier.Here we collect a number of results that are not critical to the core of the paper, but may be of interest to particular audiences.A Analysis of CEB and IB From Equation FORMULA5 and the definition of CEB in Equation (6), the following equivalence between CEB and IB is obvious: DISPLAYFORM0 where we are parameterizing IB with β on the I(Y; Z) term for convenience.",
"This equivalence generalizes as follows: DISPLAYFORM1 DISPLAYFORM2 In Figure 4 , we show the combined information planes for CEB and IB given the above parameterization.",
"The figures show the simple geometry that determines a point on the pareto-optimal frontier for both objectives.",
"Every such point is fully determined by the function (β) for a given model and dataset, where is the closest the model can approach the true optimal surface.",
"(β) = 0 corresponds to the \"infinite\" model family that exactly traces out the boundaries of the feasible region.",
"The full feasible regions can be seen in Figure 2 .From",
"this geometry we can immediately conclude that if an IB model and a CEB model have the same value of > 0 at equivalent β, the CEB model will always yield a value of I(Y; Z) closer to I(X; Y). This",
"is because the slope of the tangent lines for CEB are always lower, putting the tangent points higher on the ball. This",
"gives part of a theoretical justification for the empirical observations above that V IB 0.5 (equivalent to IB 2 in the parameterization we are describing here) fails to capture as much of the necessary information as the CEB model. Even",
"at the pareto-optimal frontier, V IB 0.5 cannot get I(Y; Z) as close to I(X; Y) as CEB can. Of course",
", we do not want to claim that this effect accounts for the fairly substantial difference in performance -that is likely to be due to a combination of other factors, including the fact that it is often easier to train continuous conditional distributions (like b(z|y)) than it is to train continuous marginal distributions (like m(z)).We also think",
"that this analysis of the geometry of IB and CEB supports our preference for targeting the MNI point and treating CEB as an objective without hyperparameters. First, there",
"are only a maximum of 4 points of interest in both the IB and CEB information planes (all 4 are visibile in Figure 2 ): the origin, where there is no information in the representation; the MNI point; the point at (I(Y; Z) = I(X; Y), I(X; Z) = H(X)) (which is an MDL-compatible representation BID17 ); and the point at (I(Y; Z) = 0, I(X; Z) = H(X|Y)) (which would be the optimal decoder for an MNI representation). These are the",
"only points naturally identified by the dataset -selecting a point on one of the edges between those four points seems to need additional justification. Second, if you",
"do agree with the MNI criterion, for a given model it is impossible to get any closer to the MNI point than by setting CEB's β = 1, due to the convexity of the pareto-optimal frontier. Much more useful",
"is making changes to the model, architecture, dataset, etc in order to make smaller. One possibility",
"in that direction that IB and CEB models offer is inspecting training examples with high rate or residual information to check for label noise, leading to a natural human-in-the-loop model improvement algorithm. Another is using",
"CEB's residual information as a measure of the quality of the trained model, as mentioned in Appendix C."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.27586206793785095,
0,
0,
0,
0,
0.05882352590560913,
0,
0,
0.052631575614213943,
0,
0.060606054961681366,
0.05714285373687744,
0,
0,
0,
0,
0,
0,
0.05128204822540283,
0.0555555522441864,
0.3529411852359772,
0,
0.0555555522441864,
0.0555555522441864,
0.028985504060983658,
0.05128204822540283,
0.08888888359069824,
0.1249999925494194,
0.0555555522441864,
0.06451612710952759,
0.17142856121063232,
0.08219178020954132,
0.05405404791235924,
0.13793103396892548,
0.2222222238779068,
0,
0.0833333283662796,
0.04255318641662598,
0.12903225421905518,
0.04255318641662598,
0,
0.06896551698446274,
0.1621621549129486,
0.1230769231915474,
0,
0.08695651590824127,
0.0714285671710968,
0.08888888359069824,
0
] | rkVOXhAqY7 | true | [
"The Conditional Entropy Bottleneck is an information-theoretic objective function for learning optimal representations."
] |
[
"Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.",
"Retrieval of such representations from a large database is however computationally challenging.",
"Approximate methods based on learning compact representations, have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA.",
"In this work, in contrast to learning compact representations, we propose to learn high dimensional and sparse representations that have similar representational capacity as dense embeddings while being more efficient due to sparse matrix multiplication operations which can be much faster than dense multiplication.",
"Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via the use of a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval.",
"Our experiments show that our approach is competitive to the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.",
"Learning semantic representations using deep neural networks (DNN) is now a fundamental facet of applications ranging from visual search (Jing et al., 2015; Hadi Kiapour et al., 2015) , semantic text matching (Neculoiu et al., 2016) , oneshot classification (Koch et al., 2015) , clustering (Oh Song et al., 2017) , and recommendation (Shankar et al., 2017) .",
"The high-dimensional dense embeddings generated from DNNs however pose a computational challenge for performing nearest neighbor search in large-scale problems with millions of instances.",
"In particular, when the embedding dimension is high, evaluating the distance of any query to all the instances in a large database is expensive, so that efficient search without sacrificing accuracy is difficult.",
"Representations generated using DNNs typically have a higher dimension compared to hand-crafted features such as SIFT (Lowe, 2004) , and moreover are dense.",
"The key caveat with dense features is that unlike bag-of-words features they cannot be efficiently searched through an inverted index, without approximations.",
"Since accurate search in high dimensions is prohibitively expensive in practice (Wang, 2011) , one has to typically sacrifice accuracy for efficiency by resorting to approximate methods.",
"Addressing the problem of efficient approximate Nearest-Neighbor Search (NNS) (Jegou et al., 2011) or Maximum Inner-Product Search (MIPS) (Shrivastava and Li, 2014) is thus an active area of research, which we review in brief in the related work section.",
"Most approaches (Charikar, 2002; Jegou et al., 2011) aim to learn compact lower-dimensional representations that preserve distance information.",
"While there has been ample work on learning compact representations, learning sparse higher dimensional representations have been addressed only recently (Jeong and Song, 2018; Cao et al., 2018) .",
"As a seminal instance, Jeong and Song (2018) propose an end-to-end approach to learn sparse and high-dimensional hashes, showing significant speed-up in retrieval time on benchmark datasets compared to dense embeddings.",
"This approach has also been motivated from a biological viewpoint (Li et al., 2018) by relating to a fruit fly's olfactory circuit, thus suggesting the possibility of hashing using higher dimensions instead of reducing the dimensionality.",
"Furthermore, as suggested by Glorot et al. (2011) , sparsity can have additional advantages of linear separability and information disentanglement.",
"In a similar vein, in this work, we propose to learn high dimensional embeddings that are sparse and hence efficient to retrieve using sparse matrix multiplication operations.",
"In contrast to compact lowerdimensional ANN-esque representations that typically lead to decreased representational power, a key facet of our higher dimensional sparse embeddings is that they can have the same representational capacity as the initial dense embeddings.",
"The core idea behind our approach is inspired by two key observations:",
"(i) retrieval of d (high) dimensional sparse embeddings with fraction p of non-zero values on an average, can be sped up by a factor of 1/p.",
"(ii) The speed up can be further improved to a factor of 1/p 2 by ensuring that the non-zero values are evenly distributed across all the dimensions.",
"This indicates that sparsity alone is not sufficient to ensure maximal speedup; the distribution of the non-zero values plays a significant role as well.",
"This motivates us to consider the effect of sparsity on the number of floating point operations (FLOPs) required for retrieval with an inverted index.",
"We propose a penalty function on the embedding vectors that is a continuous relaxation of the exact number of FLOPs, and encourages an even distribution of the non-zeros across the dimensions.",
"We apply our approach to the large scale metric learning problem of learning embeddings for facial images.",
"Our training loss consists of a metric learning (Weinberger and Saul, 2009 ) loss aimed at learning embeddings that mimic a desired metric, and a FLOPs loss to minimize the number of operations.",
"We perform an empirical evaluation of our approach on the Megaface dataset (Kemelmacher-Shlizerman et al., 2016) , and show that our proposed method successfully learns high-dimensional sparse embeddings that are orders-of-magnitude faster.",
"We compare our approach to multiple baselines demonstrating an improved or similar speed-vs-accuracy trade-off.",
"The rest of the paper is organized as follows.",
"In Section 3 we analyze the expected number of FLOPs, for which we derive an exact expression.",
"In Section 4 we derive a continuous relaxation that can be used as a regularizer, and optimized using gradient descent.",
"We also provide some analytical justifications for our relaxation.",
"In Section 5 we then compare our method on a large metric learning task showing an improved speed-accuracy trade-off compared to the baselines.",
"In this paper we proposed a novel approach to learn high dimensional embeddings with the goal of improving efficiency of retrieval tasks.",
"Our approach integrates the FLOPs incurred during retrieval into the loss function as a regularizer and optimizes it directly through a continuous relaxation.",
"We provide further insight into our approach by showing that the proposed approach favors an even distribution of the non-zero activations across all the dimensions.",
"We experimentally showed that our approach indeed leads to a more even distribution when compared to the 1 regularizer.",
"We compared our approach to a number of other baselines and showed that it has a better speed-vs-accuracy trade-off.",
"Overall we were able to show that sparse embeddings can be around 50× faster compared to dense embeddings without a significant loss of accuracy.",
"Proof.",
"Follows directly from Lemma 3.",
"Lemma 5.",
"Proof.",
"Follows directly from Lemma 2."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13636362552642822,
0.15789473056793213,
0,
0.2769230604171753,
0.39436620473861694,
0.20408162474632263,
0.08955223113298416,
0.07999999821186066,
0.1818181723356247,
0.12244897335767746,
0.08510638028383255,
0.11764705181121826,
0.09677419066429138,
0.17777776718139648,
0.11320754140615463,
0.2545454502105713,
0.19999998807907104,
0.08695651590824127,
0.3921568691730499,
0.27586206793785095,
0.10526315122842789,
0.23999999463558197,
0.26923075318336487,
0.20408162474632263,
0.25,
0.3529411852359772,
0.2380952388048172,
0.307692289352417,
0.2857142686843872,
0.19999998807907104,
0.11428570747375488,
0.1904761791229248,
0.08888888359069824,
0.05714285373687744,
0.16326530277729034,
0.3404255211353302,
0.2978723347187042,
0.3333333432674408,
0.27272728085517883,
0.3181818127632141,
0.25,
0.06451612710952759,
0.06451612710952759
] | SygpC6Ntvr | true | [
"We propose an approach to learn sparse high dimensional representations that are fast to search, by incorporating a surrogate of the number of operations directly into the loss function."
] |
[
"Model interpretability and systematic, targeted model adaptation present central challenges in deep learning.",
"In the domain of intuitive physics, we study the task of visually predicting stability of block towers with the goal of understanding and influencing the model's reasoning.",
"Our contributions are two-fold.",
"Firstly, we introduce neural stethoscopes as a framework for quantifying the degree of importance of specific factors of influence in deep networks as well as for actively promoting and suppressing information as appropriate.",
"In doing so, we unify concepts from multitask learning as well as training with auxiliary and adversarial losses.",
"Secondly, we deploy the stethoscope framework to provide an in-depth analysis of a state-of-the-art deep neural network for stability prediction, specifically examining its physical reasoning.",
"We show that the baseline model is susceptible to being misled by incorrect visual cues.",
"This leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability.",
"Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy.",
"Conversely, training on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias leading to poor performance on a harder dataset.",
"Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%.",
"Intuitive physics in the deep learning community describes physical understanding acquired by neural networks in a data-driven as opposed to a rule-based manner: With an increasing amount of training examples, we expect an algorithm to develop a better understanding of its (physical) environment, especially when the task it is trained on is inherently linked to the physical rules governing the scene.",
"However, what type of understanding the network develops highly depends on the types of scenarios it is confronted with and the task it is trying to solve.",
"Furthermore, it depends on the network architecture, on regularisation techniques, on the training procedure, etc.",
"As a result, in contrast to a rule-based approach, it is often hard to assess what form of physical understanding a neural network has developed.",
"We are specifically interested in whether the network uses visual cues as shortcuts which reflect correlations in the dataset but are incommensurate with the underlying laws of physics the network was intended to learn.In this paper, we specifically focus on stability prediction of block towers, a task which has gained interest in both the deep learning BID10 BID22 BID8 and the robotics community in recent years BID11 b) .",
"Images of towers of blocks stacked on top of each other are shown to a neural network.",
"Its task is to predict whether the tower will fall over or not resulting in a binary classification problem.",
"End-to-end learning approaches as well as simulation-based approaches achieve super-human performance on a real dataset BID10 BID22 BID8 .",
"However, with investigation of trained deep learning models limited to occlusion-based attention analyses BID10 BID8 , it is not clear to what extent neural networks trained on this task take into account physical principles such as centre-of-mass or whether they follow visual cues instead.",
"To this end, we introduce a variation of the ShapeStacks dataset presented by BID8 which facilitates the analysis of the effects of visual cues on the learning process.",
"The stethoscope framework.",
"The main network (blue), comprised of an encoder and a decoder, is trained for global stability prediction of block towers.",
"The stethoscope (orange), a two layered perceptron, is trained to predict a nuisance parameter (local stability) where the input is Z, a learned feature from an arbitrary layer of the main network.",
"The stethoscope loss is back-propagated with weighting factor λ to the main network.",
"The value of λ determines whether the stethoscope operates in analytic (λ \" 0), auxiliary (λ ą 0) or adversarial manner (λ ă 0).Motivated",
"by the need for an effective tool to understand and guide the physical reasoning of the neural network and inspired by prior research in interpretability, multi-task learning and adversarial training, we present neural stethoscopes as a unified framework for the interrogation and perturbation of task-specific information at any layer. A stethoscope",
"can be deployed in a purely analytic fashion whereby a question is posed via a stethoscope loss which is not propagated back into the main network. It can also be",
"used to promote or suppress specific information by deploying either an auxiliary or an adversarial training mechanism. The concept is",
"illustrated in FIG0 . We demonstrate",
"that deploying an auxiliary stethoscope can be used to promote information conducive to the main task improving overall network performance. Conversely, we",
"show that an adversarial stethoscope can mitigate a specific bias by effectively suppressing information. Moreover, the",
"main network does not need to be changed in order to apply a neural stethoscope.In this work, we present two contributions: (1) An in-depth analysis of the state-of-the-art approach for intuitive stability prediction. To that end,",
"we also introduce an extension to the existing ShapeStacks dataset which will be made publicly available. (2) A framework",
"for interpreting, suppressing or promoting extraction of features specific to a secondary task unifying existing approaches from interpretability, auxiliary and adversarial learning. While we frame",
"this work in the context of intuitive physics, questions regarding model interpretability and, consequently, systematic, targeted model adaptation find applicability in all domains of deep learning. For a study of",
"two MNIST toy problems with neural stethoscopes, please see Appendix C.",
"We study the state-of-the-art approach for stability prediction of block towers and test its physical understanding.",
"To that end, we create a new dataset and introduce the framework of neural stethoscopes unifying multiple threads of work in machine learning related to analytic, auxiliary and adversarial probing of neural networks.",
"The analytic application of stethoscopes allows measuring relationships between different prediction tasks.",
"We show that the network trained on stability prediction also obtains a more fine-grained physical understanding of the scene (origin of instability) but at the same time is susceptible to potentially misleading visual cues (i.e., local stability).",
"In addition to the analysis, the auxiliary and adversarial modes of the stethoscopes are used to support beneficial complementary information (origin of instability) and suppress harmful nuisance information (visual cues) without changing the network architecture of the main predictor.",
"This yields substantial performance gains in unfavourable training conditions where data is biased or labels are partially unavailable.",
"We encourage the use of neural stethoscopes for other application scenarios in the future as a general tool to analyse task relationships and suppress or promote extraction of specific features.",
"This can be done by collecting additional labels or using existing multi-modal data."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.12903225421905518,
0,
0.054054051637649536,
0.29629629850387573,
0.11428570747375488,
0.07999999821186066,
0.12121211737394333,
0.0833333283662796,
0.11428570747375488,
0.14814814925193787,
0.13793103396892548,
0.1875,
0.09090908616781235,
0.1875,
0.060606058686971664,
0.07999999821186066,
0.06896550953388214,
0,
0.07692307233810425,
0,
0,
0.06896550953388214,
0.052631575614213943,
0.08695651590824127,
0.1249999925494194,
0.15686273574829102,
0,
0.2857142686843872,
0,
0.12903225421905518,
0.07692307233810425,
0.043478257954120636,
0.06896550953388214,
0.22857142984867096,
0,
0,
0.23076923191547394,
0.20512820780277252,
0,
0.1304347813129425,
0.20000000298023224,
0.0714285671710968,
0.10526315122842789,
0
] | BylctiCctX | true | [
"Combining auxiliary and adversarial training to interrogate and help physical understanding."
] |
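The stethoscope record above spells out the training signal: a two-layer perceptron probe reads a feature Z from the main network, always minimises its own nuisance loss, and that loss is fed back into the main network with weight λ (λ = 0 analytic, λ > 0 auxiliary, λ < 0 adversarial). Below is a minimal PyTorch sketch of that arrangement; the module sizes, toy data, learning rates, and the two-optimiser setup are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())
        self.decoder = nn.Linear(128, 2)              # global-stability logits

    def forward(self, x):
        z = self.encoder(x)                           # feature Z probed by the stethoscope
        return self.decoder(z), z

stethoscope = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

main = MainNet()
opt_main = torch.optim.Adam(main.parameters(), lr=1e-3)
opt_probe = torch.optim.Adam(stethoscope.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = -0.1   # < 0: adversarial, 0: analytic, > 0: auxiliary

x = torch.randn(8, 1, 32, 32)                         # toy batch
y_global = torch.randint(0, 2, (8,))                  # main-task labels (global stability)
y_local = torch.randint(0, 2, (8,))                   # stethoscope labels (local stability)

logits, z = main(x)

# 1) The stethoscope always minimises its own loss, on detached features.
probe_loss = ce(stethoscope(z.detach()), y_local)
opt_probe.zero_grad(); probe_loss.backward(); opt_probe.step()

# 2) The main network receives the stethoscope loss weighted by lam, so it can
#    promote (lam > 0), ignore (lam = 0) or suppress (lam < 0) that information.
main_loss = ce(logits, y_global) + lam * ce(stethoscope(z), y_local)
opt_main.zero_grad(); opt_probe.zero_grad(); main_loss.backward(); opt_main.step()
```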
[
"Flow based models such as Real NVP are an extremely powerful approach to density estimation.",
"However, existing flow based models are restricted to transforming continuous densities over a continuous input space into similarly continuous distributions over continuous latent variables.",
"This makes them poorly suited for modeling and representing discrete structures in data distributions, for example class membership or discrete symmetries.",
"To address this difficulty, we present a normalizing flow architecture which relies on domain partitioning using locally invertible functions, and possesses both real and discrete valued latent variables. ",
"This Real and Discrete (RAD) approach retains the desirable normalizing flow properties of exact sampling, exact inference, and analytically computable probabilities, while at the same time allowing simultaneous modeling of both continuous and discrete structure in a data distribution.",
"Latent generative models are one of the prevailing approaches for building expressive and tractable generative models.",
"The generative process for a sample x can be expressed as DISPLAYFORM0 where z is a noise vector, and g a parametric generator network (typically a deep neural network).",
"This paradigm has several incarnations, including variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) , generative adversarial networks (Goodfellow et al., 2014) , and flow based models BID0 BID9 BID5 Kingma & Dhariwal, 2018; BID3 Grathwohl et al., 2019) .The",
"training process and model architecture for many existing latent generative models, and for all published flow based models, assumes a unimodal smooth distribution over latent variables z. Given",
"the parametrization of g as a neural network, the mapping to x is a continuous function. This",
"imposed structure makes it challenging to model data distributions with discrete structure -for instance, multi-modal distributions, distributions with holes, distributions with discrete symmetries, or distributions that lie on a union of manifolds (as may approximately be true for natural images, see BID11 . Indeed",
", such cases require the model to learn a generator whose input Jacobian has highly varying or infinite magnitude to separate the initial noise source into different clusters. Such",
"variations imply a challenging optimization problem due to large changes in curvature. This",
"shortcoming can be critical as several problems of interest are hypothesized to follow a clustering structure, i.e. the distributions is concentrated along several disjoint connected sets (Eghbal-zadeh et al., 2018) .A standard",
"way to address this issue has been to use mixture models BID16 Richardson & Weiss, 2018; Eghbal-zadeh et al., 2018) or structured priors (Johnson et al., 2016) . In order to",
"efficiently parametrize the model, mixture models are often formulated as a discrete latent variable models (Hinton & Salakhutdinov, 2006; BID4 Mnih & Gregor, 2014 ; van den Oord model (1c, 1d) . Note the dependency",
"of K on Z in 1d. While this is not necessary",
", we will exploit this structure as highlighted later in the main text and in Figure 4 . et al., 2017) , some of which",
"can be expressed as a deep mixture model BID10 BID14 BID13 . Although the resulting exponential",
"number of mixture components with depth in deep mixture models is an advantage in terms of expressivity, it is an impediment to inference, evaluation, and training of such models, often requiring as a result the use of approximate methods like hard-EM or variational inference (Neal & Hinton, 1998) .In this paper we combine piecewise",
"invertible functions with discrete auxiliary variables, selecting which invertible function applies, to describe a deep mixture model. This framework enables a probabilistic",
"model's latent space to have both real and discrete valued units, and to capture both continuous and discrete structure in the data distribution. It achieves this added capability while",
"preserving the exact inference, exact sampling, exact evaluation of log-likelihood, and efficient training that make standard flow based models desirable.",
"We introduced an approach to tractably evaluate and train deep mixture models using piecewise invertible maps as a folding mechanism.",
"This allows exact inference, exact generation, and exact evaluation of log-likelihood, avoiding many issues in previous discrete variables models.",
"This method can easily be combined with other flow based architectural components, allowing flow based models to better model datasets with discrete as well as continuous structure.",
"Figure 11: RAD and REAL NVP inference processes on the ring Gaussian mixture problem.",
"Each column correspond to a RAD or affine coupling layer.",
"RAD effectively uses foldings in order to bridge the multiple modes of the distribution into a single mode, primarily in the last layers of the transformation, whereas REAL NVP struggles to bring together these modes under the standard Gaussian distribution using continuous bijections.",
"A CONTINUITYThe standard approach in learning a deep probabilistic model has been stochastic gradient descent on the negative log-likelihood.",
"Although the model enables the computation of a gradient almost everywhere, the log-likelihood is unfortunately discontinuous.",
"Let's decompose the log-likelihood DISPLAYFORM0 There are two sources of discontinuity in this expression: f K is a function with discrete values (therefore discontinuous) and ∂f Z ∂x T is discontinuous because of the transition between the subsets A k , leading to the expression of interest DISPLAYFORM1 which takes a role similar to the log-Jacobian determinant, a pseudo log-Jacobian determinant.Let's build from now on the simple scalar case and a piecewise linear function DISPLAYFORM2 In this case, s(z) = log p K|Z k | z k≤N + C1 |K| can be seen as a vector valued function.We can attempt at parametrizing the model such that the pseudo log-Jacobian determinant becomes continuous with respect to β by expressing the boundary condition at x = β DISPLAYFORM3 ⇒s(−α 2 β) 2 + log(α 2 ) = s(−α 2 β) 3 + log(α 3 ).",
"DISPLAYFORM4 − log(α 1 ), log(α 2 ), log(α 3 ) + β 2 1 + cos (zα DISPLAYFORM5 Another type of boundary condition can be found at between the non-invertible area and the invertible area z = α 2 β, as ∀z > α 2 β, p 3|Z (3 | z) = 1, therefore DISPLAYFORM6 Since the condition ∀k < 3, p K|Z k | z) → 0 when z → (α 2 β) − will lead to an infinite loss barrier at x = −β, another way to enforce this boundary condition is by adding linear pieces FIG1 ): DISPLAYFORM7 The inverse is defined as DISPLAYFORM8 In order to know the values of s at the boundaries ±α 2 β, we can use the logit function DISPLAYFORM9 Given those constraints, the model can then be reliably learned through gradient descent methods.",
"Note that the resulting tractability of the model results from the fact that the discrete variables k is only interfaced during inference with the distribution p K|Z , unlike discrete variational autoencoders approaches (Mnih & Gregor, 2014; BID15 where it is fed to a deep neural network.",
"Similar to BID7 , the learning of discrete variables is achieved by relying on the the continuous component of the model, and, as opposed as other approaches (Jang et al., 2017; BID12 Grathwohl et al., 2018; BID12 , this gradient signal extracted is exact and closed form."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23999999463558197,
0.20000000298023224,
0.06896550953388214,
0.10526315122842789,
0.045454543083906174,
0,
0,
0.04444444179534912,
0.1764705777168274,
0.07999999821186066,
0.08695651590824127,
0.10810810327529907,
0.08695651590824127,
0.04651162400841713,
0.054054051637649536,
0.04878048226237297,
0,
0,
0,
0.07017543911933899,
0.13333332538604736,
0.11764705181121826,
0.0714285671710968,
0.06666666269302368,
0.14814814925193787,
0.1818181723356247,
0,
0.09999999403953552,
0.045454543083906174,
0,
0,
0.03539822995662689,
0.0178571417927742,
0.11999999731779099,
0.12765957415103912
] | HJeZNLIt_4 | true | [
"Flow based models, but non-invertible, to also learn discrete variables"
] |
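As a concrete, heavily simplified illustration of the piecewise-invertible idea in the record above, the sketch below folds a scalar with |x|, records which branch the point came from as a discrete variable k, and evaluates an exact log-density by combining the base density, the (here trivial) Jacobian term, and p(K|Z). The absolute-value fold, the exponential base density, and the logistic gate are assumptions chosen to keep the example tiny, not the paper's parameterisation.

```python
import numpy as np

def fold(x):
    """Piecewise-invertible map: z = |x|; k records which branch x came from."""
    return np.abs(x), (x >= 0).astype(int)

def unfold(z, k):
    """Exact inverse once the discrete variable is known."""
    return np.where(k == 1, z, -z)

def log_gate(k, z, w=2.0):
    """A toy p(K | Z): logistic in z (stands in for a learned gating network)."""
    logit = w * (z - 1.0)
    logp1 = -np.logaddexp(0.0, -logit)
    logp0 = -np.logaddexp(0.0, logit)
    return np.where(k == 1, logp1, logp0)

def log_density(x):
    """log p(x) = log p_Z(f(x)) + log|df/dx| + log p(K = k(x) | Z = f(x))."""
    z, k = fold(x)
    log_pz = -z                     # Exp(1) base density on z >= 0, so the model normalises
    log_jac = 0.0                   # |d|x|/dx| = 1 away from the fold point
    return log_pz + log_jac + log_gate(k, z)

x = np.array([-1.5, 0.3, 2.0])
z, k = fold(x)
assert np.allclose(unfold(z, k), x)   # exact inversion, as in the framework described above
print(log_density(x))
```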
[
"We investigate the loss surface of neural networks.",
"We prove that even for one-hidden-layer networks with \"slightest\" nonlinearity, the empirical risks have spurious local minima in most cases.",
"Our results thus indicate that in general \"no spurious local minim\" is a property limited to deep linear networks, and insights obtained from linear networks may not be robust.",
"Specifically, for ReLU(-like) networks we constructively prove that for almost all practical datasets there exist infinitely many local minima.",
"We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum.",
"Our results make the least restrictive assumptions relative to existing results on spurious local optima in neural networks.",
"We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.",
"Neural network training reduces to solving nonconvex empirical risk minimization problems, a task that is in general intractable.",
"But success stories of deep learning suggest that local minima of the empirical risk could be close to global minima.",
"BID5 use spherical spin-glass models from statistical physics to justify how the size of neural networks may result in local minima that are close to global.",
"However, due to the complexities introduced by nonlinearity, a rigorous understanding of optimality in deep neural networks remains elusive.Initial steps towards understanding optimality have focused on deep linear networks.",
"This area has seen substantial recent progress.",
"In deep linear networks there is no nonlinear activation; the output is simply a multilinear function of the input.",
"BID1 prove that some shallow networks have no spurious local minima, and Kawaguchi (2016) extends this result to squared error deep linear networks, showing that they only have global minima and saddle points.",
"Several other works on linear nets have also appeared (Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Yun et al., 2018; Zhou & Liang, 2018; Laurent & Brecht, 2018a; b) .The",
"theory of nonlinear neural networks (which is the actual setting of interest), however, is still in its infancy. There",
"have been attempts to extend the \"local minima are global\" property from linear to nonlinear networks, but recent results suggest that this property does not usually hold (Zhou & Liang, 2018) . Although",
"not unexpected, rigorously proving such results turns out to be non-trivial, forcing several authors (e.g., Safran & Shamir (2018) ; BID8 ; Wu et al. (2018) ) to make somewhat unrealistic assumptions (realizability and Gaussianity) on data.In contrast, we prove existence of spurious local minima under the least restrictive (to our knowledge) assumptions. Since seemingly",
"subtle changes to assumptions can greatly influence the analysis as well as the applicability of known results, let us first summarize what is known; this will also help provide a better intuitive perspective on our results (as the technical details are somewhat involved).",
"We use the shorthandX := X (C1.",
"DISPLAYFORM0",
"3) The activation function h ish s+,s− .(C1.4",
") The",
"hidden layer has at least width 2: DISPLAYFORM1 Then, there is a spurious local minimum whose risk is the same as linear least squares model. Moreover",
", due to nonnegative homogeneity ofh s+,s− , there are infinitely many such local minima.Noticing that most real world datasets cannot be perfectly fit with linear models, Theorem 1 shows that when we use the activationh s+,s− , the empirical risk has bad local minima for almost all datasets that one may encounter in practice. Although",
"it is not very surprising that neural networks have spurious local minima, proving this rigorously is non-trivial. We provide",
"a constructive and deterministic proof for this problem that holds for general datasets, which is in contrast to experimental results of Safran & Shamir (2018) . We emphasize",
"that Theorem 1 also holds even for \"slightest\" nonlinearities, e.g., when s + = 1 + and s − = 1 where > 0 is small. This suggests",
"that the \"local min is global\" property is limited to the simplified setting of linear neural networks.Existing results on squared error loss either provide one counterexample (Swirszcz et al., 2016; Zhou & Liang, 2018) , or assume realizability and Gaussian input (Safran & Shamir, 2018; BID8 . Realizability",
"is an assumption that the output is generated by a network with unknown parameters. In real datasets",
", neither input is Gaussian nor output is generated by neural networks; in contrast, our result holds for most realistic situations, and hence delivers useful insight.There are several results proving sufficient conditions for global optimality of nonlinear neural networks (Soudry & Carmon, 2016; Xie et al., 2016; Nguyen & Hein, 2017) . But they rely on",
"assumptions that the network width scales with the number of data points. For instance, applying",
"Theorem 3.4 of Nguyen & Hein (2017) to our network proves that ifX has linearly independent columns and other assumptions hold, then any critical point with W 2 = 0 is a global minimum. However, linearly independent",
"columns already imply row(X) = R m , so even linear models RX can fit any Y ; i.e., there is less merit in using a complex model to fit Y . Theorem 1 does not make any structural",
"assumption other than d 1 ≥ 2, and addresses the case where it is impossible to fit Y with linear models, which is much more realistic.It is worth comparing our result with Laurent & Brecht (2018a) , who use hinge loss based classification and assume linear separability to prove \"no spurious local minima\" for Leaky-ReLU networks. Their result does not contradict our theorem",
"because the losses are different and we do not assume linear separability.One might wonder if our theorem holds even with d 1 ≥ m. Venturi et al. (2018) showed that onehidden-layer",
"neural networks with d 1 ≥ m doesn't have spurious valleys, hence there is no strict spurious local minima; however, due to nonnegative homogeneity ofh s+,s− we only have non-strict local minima. Based on BID2 , one might claim that with wide enough",
"hidden layer and random W 1 and b 1 , one can fit any Y ; however, this is not the case, by our assumption that linear models RX cannot fit Y . Note that for any d 1 , there is a non-trivial region",
"(measure > 0) in the parameter space where entry-wise) . In this region, the output of neural networkŶ is still",
"a linear combination of rows ofX, soŶ cannot fit Y ; in fact, it can only do as well as linear models. We will see in the Step 1 of Section 2.2 that the bad",
"local minimum that we construct \"kills\" d 1 − 1 neurons; however, killing many neurons is not a necessity, and it is just to simply the exposition. In fact, any local minimum in the region W 1 X + b 1",
"1 T m > 0 is a spurious local minimum. DISPLAYFORM2",
"We are now ready to state our first main theorem, whose proof is deferred to Appendix A7.",
"Theorem 4.",
"Suppose that for all j, d j ≥ min{d x , d y }, and that the loss is given by (4), where 0 is differentiable on R dy×dx .",
"For any critical point (Ŵ j ) H+1 j=1 of the loss , the following claims hold: DISPLAYFORM0 j=1 is a saddle of .",
"DISPLAYFORM1 j=1 is a local min (max) of ifŴ H+1:1 is a local min (max) of 0 ; moreover, DISPLAYFORM2 j=1 is a global min (max) of if and only ifŴ H+1:1 is a global min (max) of 0 .3.",
"If there exists j * ∈ [H + 1] such thatŴ H+1:j * +1 has full row rank andŴ j * −1:1 has full column rank, then ∇ 0 (Ŵ H+1:1 ) = 0, so 2(a",
") and 2(b",
") hold. Also",
", DISPLAYFORM3 j=1 is a local min (max) of .Let",
"us paraphrase Theorem 4 in words. In",
"particular, it states that if the hidden layers are \"wide enough\" so that the product W H+1:1 can attain full rank and if the loss assumes the form (4) for a differentiable loss 0 , then the type (optimal or saddle point) of a critical point (Ŵ j ) H+1 j=1 of is governed by the behavior of 0 at the productŴ H+1:1 .Note",
"that for any critical point (Ŵ j ) H+1 j=1 of the loss , either ∇ 0 (Ŵ H+1:1 ) = 0 or ∇ 0 (Ŵ H+1:1 ) = 0. Parts",
"1 and 2 handle these two cases. Also",
"observe that the condition in Part 3 implies ∇ 0 = 0, so Part 3 is a refinement of Part 2. A notable",
"fact is that a sufficient condition for Part 3 isŴ H+1:1 having full rank. For example",
", if d x ≥ d y , full-rankŴ H+1:1 implies rank(Ŵ H+1:2 ) = d y , whereby the condition in Part 3 holds with j * = 1.IfŴ H+1:1 is not critical for 0 , then (Ŵ j ) H+1 j=1 must be a saddle point of . IfŴ H+1:1",
"is a local min/max of 0 , (Ŵ j ) H+1 j=1 is also a local min/max of . Notice, however",
", that Part 2(a) does not address",
"the case of saddle points; whenŴ H+1:1 is a saddle point of 0 , the tuple (Ŵ j ) H+1 j=1 can behave arbitrarily. However, with the condition",
"in Part 3, statements 2(a) and 3(a) hold at the same",
"time, so",
"thatŴ H+1:1 is a local min/max of 0 if and only if (Ŵ j ) H+1 j=1 is a local min/max of . Observe that the same \"if and",
"only if\" statement holds for saddle points due to their definition; in summary, the types (min/max/saddle) of the critical points (Ŵ j ) H+1 j=1 andŴ H+1:1 match exactly. Although Theorem 4 itself is",
"of interest, the following corollary highlights its key implication for deep linear networks. Corollary 5. In addition to",
"the assumptions",
"in Theorem 4, assume that any critical point of 0 is a global min (max). For any critical point (Ŵ j )",
"Corollary 5 shows that for any differentiable loss function 0 whose critical points are global minima, the loss has only global minima and saddle points, therefore satisfying the \"local minima are global\" property. In other words, for such an 0",
", the multilinear re-parametrization introduced by deep linear networks does not introduce any spurious local minima/maxima; it only introduces saddle points. Importantly, Corollary 5 also",
"provides a checkable condition that distinguishes global minima from saddle points. Since is nonconvex, it is remarkable",
"that such a simple necessary and sufficient condition for global optimality is available. DISPLAYFORM4 Our result generalizes",
"previous works on linear networks such as Kawaguchi FORMULA2 ; Yun et al. (2018) ; Zhou & Liang (2018) , because it provides conditions for global optimality for a broader range of loss functions without assumptions on datasets. Laurent & Brecht (2018b) proved that",
"if DISPLAYFORM5 j=1 is a local min of , thenŴ H+1:1 is a critical point of 0 . First, observe that this result is implied",
"by Theorem 4.1. So our result, which was proved in parallel",
"and independently, is strictly more general. With additional assumption that critical points",
"of 0 are global minima, Laurent & Brecht (2018b) showed that \"local min is global\" property holds for linear neural networks; our Corollay 5 gives a simple and efficient test condition as well as proving there are only global minima and saddles, which is clearly stronger.",
"We investigated the loss surface of deep linear and nonlinear neural networks.",
"We proved two theorems showing existence of spurious local minima on nonlinear networks, which apply to almost all datasets (Theorem 1) and a wide class of activations (Theorem 2).",
"We concluded by Theorem 4, showing a general result studying the behavior of critical points in multilinearly parametrized functions, which unifies other existing results on linear neural networks.",
"Given that spurious local minima are common in neural networks, a valuable future research direction will be investigating how far local minima are from global minima in general, and how the size of the network affects this gap.",
"Another thing to note is that even though we showed the existence of spurious local minima in the whole parameter space, things can be different in restricted sets of parameter space (e.g., by adding regularizers).",
"Understanding the loss surface in such sets would be valuable.",
"Additionally, one can try to show algorithmic/trajectory results of (stochastic) gradient descent.",
"We hope that our paper will be a stepping stone to such future research.",
"A2 PROOF OF THEOREM 1, STEP 2, CASE 2"
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.42105263471603394,
0.21739129722118378,
0.3333333134651184,
0.20512819290161133,
0.17142856121063232,
0.09756097197532654,
0.1111111044883728,
0.1666666567325592,
0.1395348757505417,
0.045454539358615875,
0,
0.11428570747375488,
0.25,
0,
0.11428570747375488,
0.1249999925494194,
0.138888880610466,
0.03448275476694107,
0.1599999964237213,
0.07407406717538834,
0.1428571343421936,
0.1515151411294937,
0.277777761220932,
0.22727271914482117,
0.1860465109348297,
0.0952380895614624,
0.1764705777168274,
0.08695651590824127,
0.1249999925494194,
0.07407406717538834,
0.03703703358769417,
0.1621621549129486,
0.1599999964237213,
0.10526315122842789,
0.15094339847564697,
0.0555555522441864,
0.1249999925494194,
0.1538461446762085,
0.13793103396892548,
0.05882352590560913,
0.1818181723356247,
0.052631575614213943,
0.10810810327529907,
0,
0.0952380895614624,
0,
0.0714285671710968,
0,
0.11764705181121826,
0.1538461446762085,
0.07692307233810425,
0.10526315122842789,
0.11764705181121826,
0.06896550953388214,
0.060606054961681366,
0.07999999821186066,
0.0476190410554409,
0.13793103396892548,
0.20512819290161133,
0.07999999821186066,
0.1111111044883728,
0.05405404791235924,
0.19607841968536377,
0.1860465109348297,
0.05882352590560913,
0.17142856121063232,
0.14035087823867798,
0.10526315122842789,
0,
0.19999998807907104,
0.1355932205915451,
0.2666666507720947,
0.2666666507720947,
0.1304347813129425,
0.20408162474632263,
0.19607841968536377,
0.0714285671710968,
0,
0.1249999925494194,
0
] | rke_YiRct7 | true | [
"We constructively prove that even the slightest nonlinear activation functions introduce spurious local minima, for general datasets and activation functions."
] |
[
"The tasks that an agent will need to solve often aren’t known during training.",
"However, if the agent knows which properties of the environment we consider im- portant, then after learning how its actions affect those properties the agent may be able to use this knowledge to solve complex tasks without training specifi- cally for them.",
"Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest.",
"We propose a model that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions.",
"Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan.",
"We show in grid-world games and 3D block stacking that our model is able to generalize to longer, more complex tasks at test time even when it only sees short, simple tasks at train time.\n",
"Deep reinforcement learning has demonstrated impressive successes in building agents that can solve difficult tasks, e.g. BID20 ; .",
"However, these successes have mostly been confined to situations where it is possible to train a large number of times on a single known task or distribution of tasks.",
"On the other hand, in some situations, the tasks of interest are not known at training time or are too complex to be completed by uninformed exploration on a sparse set of rewards.",
"In these situations, it may be that the cost of the supervision required to identify the important features of the environment, or to describe the space of possible tasks within it, is not so onerous.",
"Recently several papers have taken this approach, for example Reed & de Freitas (2015) ; BID2 ; BID22 ; BID7 .If",
"we expect an agent to be able to solve many different kinds of tasks, the representation of the task space is particularly important. In",
"this paper, we impose structure on the task space through the use of attribute sets, a high-level abstraction of the environment state. The",
"form of these are chosen by hand to capture task-relevant concepts, allowing both end goals as well as intermediate sub-tasks to be succinctly represented. As",
"in Reed & de Freitas (2015) ; BID2 ; BID22 , we thus trade extra supervision for generalization.The attributes yield a natural space in which to plan: instead of searching over possible sequences of actions, we instead search over attribute sets. Once",
"the agent learns how its actions affect the environment in terms of its relevant attributes, novel tasks can be solved compositionally by executing a plan consisting of a sequence of transitions between abstract states defined by those attributes. In the",
"experiments below, we will show that in various environments, training only on simple tasks, our agents are able to generalize to novel, more complex tasks.",
"Our results show that structuring the space of tasks with high level attributes allows an agent to compose policies for the solutions of simple tasks into solutions of more complex tasks.",
"The agent plans a path to the final goal at the level of the attributes, and executes the steps in this path with a reactive policy.",
"Thus, supervision of an agent by labeling attributes can lead to generalization from simple tasks at train time to more complex tasks at test time.",
"Nevertheless, there are many fronts for further work:Sample complexity of the planning module: In Table 5 we can see both the benefits and the liabilities of the explicit non-parametric form for c.",
"By 10K samples, the parametric lower level policy is already able to have a reasonable success rate.",
"However, because in this environment, there are roughly 200K edges in the graph, most of the edges have not been seen, and without any weight-sharing, our model cannot estimate these transition probabilities.",
"On the other hand, by 100K samples the model has seen enough of the graph to make nontrivial plans; and the non-parametric form of the graph makes planning straightforward.",
"In future work, we hope to combine parametric models for c with search to increase the sample efficiency of the planning module.",
"Alternatively, In frame 4 of this example, the policy is directed to place the green block in front of the red and blue blocks, but this is impossible because the blue and red are already in the frontmost position.we might hope to make progress on dynamic abstraction (projecting out some of the attributes) depending on the current state and goal, which would make the effective number of edges of the graph smaller.Exploration Although we discuss an agent in an environment, we have elided many of the difficult problems of reinforcement learning.",
"In particular, the environments considered in this work allow sampling low level transitions by starting at random states and following random policies, and these are sufficient to cover the state space, although we note that the method for training the model described in Section 2.1 allows for more sophisticated exploration policies.",
"Thus we sidestep the exploration problem, one of the key difficulties of reinforcement learning.",
"Nevertheless, building composable models even in this setting is nontrivial, and our view is that it is important to demonstrate success here (and decouple issues of exploration and composability) before moving on to the full RL problem.We believe that the attributes ρ and c, in addition to their usefulness for planning, provide a framework for incentivizing exploration.",
"The agent can be rewarded for finding unseen (or rarely-seen) high level transitions, or for validating or falsifying hypotheses about the existence of entries of c.Learning the attributes: Discovering the attributes automatically would remove much of the need for human supervision.",
"Recent work, such as BID33 , demonstrates how this could be done.",
"Another avenue for discovering attributes is to use a few \"seed\" attributes; and use aliasing as a signal that some attributes need to be refined."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19354838132858276,
0.07407406717538834,
0.04878048226237297,
0.0555555522441864,
0.09999999403953552,
0.20408162474632263,
0.1111111044883728,
0.1395348757505417,
0.12765957415103912,
0.13333332538604736,
0.0555555522441864,
0.10526315122842789,
0.05405404791235924,
0.04999999329447746,
0.07407406717538834,
0.03999999538064003,
0.2926829159259796,
0.1904761791229248,
0.052631575614213943,
0.21052631735801697,
0.045454539358615875,
0.05882352590560913,
0,
0.09999999403953552,
0.10810810327529907,
0.049382712692022324,
0.06451612710952759,
0,
0.0937499925494194,
0,
0,
0.10526315122842789
] | r154_g-Rb | true | [
"Compositional attribute-based planning that generalizes to long test tasks, despite being trained on short & simple tasks."
] |
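The record above separates high-level planning over attribute sets from low-level execution. The sketch below shows that two-level structure on a toy block-stacking example: breadth-first search finds a path through a graph of attribute sets, and each edge is handed to a (stubbed) transition policy. The attribute names, the hand-written graph, and the policy stub are illustrative assumptions; in the paper both the graph and the policy are learned from data.

```python
from collections import deque

# Nodes are attribute sets; an edge means the low-level policy can realise that
# transition reliably (in the paper this graph is estimated from experience).
graph = {
    frozenset({"red_on_table", "green_on_table"}): [frozenset({"red_on_blue", "green_on_table"})],
    frozenset({"red_on_blue", "green_on_table"}): [frozenset({"red_on_blue", "green_on_red"})],
    frozenset({"red_on_blue", "green_on_red"}): [],
}

def plan(start, goal):
    """Breadth-first search over attribute sets; returns a path or None."""
    parent, queue = {start: None}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

def low_level_policy(current, target):
    # Stand-in for the learned reactive policy that performs one transition.
    print("execute:", sorted(current), "->", sorted(target))

start = frozenset({"red_on_table", "green_on_table"})
goal = frozenset({"red_on_blue", "green_on_red"})
path = plan(start, goal)
for a, b in zip(path, path[1:]):
    low_level_policy(a, b)
```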
[
"The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods.",
"Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks.",
"However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once.",
"For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels.",
"If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. ",
"This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs.",
"In some domains, this makes meta-learning entirely inapplicable.",
"In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation.",
"This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input.",
"By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks.",
"We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult.",
"Our approach substantially outperforms standard meta-learning algorithms in these settings. ",
"The ability to learn new concepts and skills with small amounts of data is a critical aspect of intelligence that many machine learning systems lack.",
"Meta-learning (Schmidhuber, 1987) has emerged as a promising approach for enabling systems to quickly learn new tasks by building upon experience from previous related tasks (Thrun & Pratt, 2012; Koch et al., 2015; Santoro et al., 2016; Ravi & Larochelle, 2016; Finn et al., 2017) .",
"Meta-learning accomplishes this by explicitly optimizing for few-shot generalization across a set of meta-training tasks.",
"The meta-learner is trained such that, after being presented with a small task training set, it can accurately make predictions on test datapoints for that meta-training task.",
"While these methods have shown promising results, current methods require careful design of the meta-training tasks to prevent a subtle form of task overfitting, distinct from standard overfitting in supervised learning.",
"If the task can be accurately inferred from the test input alone, then the task training data can be ignored while still achieving low meta-training loss.",
"In effect, the model will collapse to one that makes zero-shot decisions.",
"This presents an opportunity for overfitting where the meta-learner generalizes on meta-training tasks, but fails to adapt when presented with training data from novel tasks.",
"We call this form of overfitting the memorization problem in meta-learning because the meta-learner memorizes a function that solves all of the meta-training tasks, rather than learning to adapt.",
"Existing meta-learning algorithms implicitly resolve this problem by carefully designing the metatraining tasks such that no single model can solve all tasks zero-shot; we call tasks constructed in this Implementation and examples available here: https://github.com/google-research/ google-research/tree/master/meta_learning_without_memorization.",
"way mutually-exclusive.",
"For example, for N -way classification, each task consists of examples from N randomly sampled classes.",
"The N classes are labeled from 1 to N , and critically, for each task, we randomize the assignment of classes to labels {1, 2, . . . , N } (visualized in Appendix Figure 3 ).",
"This ensures that the task-specific class-to-label assignment cannot be inferred from a test input alone.",
"However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting goal information).",
"While shuffling labels provides a reasonable mechanism to force tasks to be mutually-exclusive with standard few-shot image classification datasets such as MiniImageNet (Ravi & Larochelle, 2016) , this solution cannot be applied to all domains where we would like to utilize meta-learning.",
"For example, consider meta-learning a pose predictor that can adapt to different objects: even if N different objects are used for meta-training, a powerful model can simply learn to ignore the training set for each task, and directly learn to predict the pose of each of the N objects.",
"However, such a model would not be able to adapt to new objects at meta-test time.",
"The primary contributions of this work are:",
"1) to identify and formalize the memorization problem in meta-learning, and",
"2) to propose an meta-regularizer (MR) using information theory as a general approach for mitigating this problem without placing restrictions on the task distribution.",
"We formally differentiate the meta-learning memorization problem from overfitting problem in conventional supervised learning, and empirically show that naïve applications of standard regularization techniques do not solve the memorization problem in meta-learning.",
"The key insight of our metaregularization approach is that the model acquired when memorizing tasks is more complex than the model that results from task-specific adaptation because the memorization model is a single model that simultaneously performs well on all tasks.",
"It needs to contain all information in its weights needed to do well on test points without looking at training points.",
"Therefore we would expect the information content of the weights of a memorization model to be larger, and hence the model should be more complex.",
"As a result, we propose an objective that regularizes the information complexity of the meta-learned function class (motivated by Alemi et al. (2016) ; Achille & Soatto (2018) ).",
"Furthermore, we show that meta-regularization in MAML can be rigorously motivated by a PAC-Bayes bound on generalization.",
"In a series of experiments on non-mutually-exclusive task distributions entailing both few-shot regression and classification, we find that memorization poses a significant challenge for both gradient-based (Finn et al., 2017) and contextual (Garnelo et al., 2018a ) meta-learning methods, resulting in near random performance on test tasks in some cases.",
"Our meta-regularization approach enables both of these methods to achieve efficient adaptation and generalization, leading to substantial performance gains across the board on non-mutually-exclusive tasks.",
"Meta-learning has achieved remarkable success in few-shot learning problems.",
"However, we identify a pitfall of current algorithms: the need to create task distributions that are mutually exclusive.",
"This requirement restricts the domains that meta-learning can be applied to.",
"We formalize the failure mode, i.e. the memorization problem, that results from training on non-mutually-exclusive tasks and distinguish it as a function-level overfitting problem compared to the the standard label-level overfitting in supervised learning.",
"We illustrate the memorization problem with different meta-learning algorithms on a number of domains.",
"To address the problem, we propose an algorithm-agnostic meta-regularization (MR) approach that leverages an information-theoretic perspective of the problem.",
"The key idea is that by placing a soft restriction on the information flow from meta-parameters in prediction of test set labels, we can encourage the meta-learner to use task training data during meta-training.",
"We achieve this by successfully controlling the complexity of model prior to the task adaptation.",
"The memorization issue is quite broad and is likely to occur in a wide range of real-world applications, for example, personalized speech recognition systems, learning robots that can adapt to different environments (Nagabandi et al., 2018) , and learning goal-conditioned manipulation skills using trial-and-error data.",
"Further, this challenge may also be prevalent in other conditional prediction problems, beyond meta-learning, an interesting direction for future study.",
"By both recognizing the challenge of memorization and developing a general and lightweight approach for solving it, we believe that this work represents an important step towards making meta-learning algorithms applicable to and effective on any problem domain.",
"We present the detailed algorithm for meta-regularization on weights with conditional neural processes (CNP) in Algorithm 1 and with model-agnostic meta-learning (MAML) in Algorithm 2.",
"For CNP, we add the regularization on the weights θ of encoder and leave other weightsθ unrestricted.",
"For MAML, we similarly regularize the weights θ from input to an intermediate hidden layer and leave the weightsθ for adaptation unregularized.",
"In this way, we restrict the complexity of the pre-adaptation model not the post-adaptation model.",
"Algorithm 1: Meta-Regularized CNP input : Task distribution p(T ); Encoder weights distribution q(θ; τ ) = N (θ; τ ) with Gaussian parameters τ = (θ µ , θ σ ); Prior distribution r(θ) and Lagrangian multiplier β;θ that parameterizes feature extractor hθ(·) and decoder Tθ(·).",
"Stepsize α.",
"output: Network parameter τ ,θ.",
"Initialize τ ,θ randomly; while not converged do Sample a mini-batch of {T i } from p(T ); Sample θ ∼ q(θ; τ ) with reparameterization ;",
"Algorithm 2: Meta-Regularized MAML input : Task distribution p(T ); Weights distribution q(θ; τ ) = N (θ; τ ) with Gaussian parameters τ = (θ µ , θ σ ); Prior distribution r(θ) and Lagrangian multiplier β; Stepsize α, α .",
"output: Network parameter τ ,θ.",
"Initialize τ ,θ randomly; while not converged do Sample a mini-batch of {T i } from p(T ); Sample θ ∼ q(θ; τ ) with reparameterization ;",
"Compute task specific parameter φ i =θ + α ∇θ log q(y i |z i ,θ) ; Updateθ ←θ + α∇θ Ti log q(y Algorithm 3: Meta-Regularized Methods in Meta-testing input : Meta-testing task T with training data D = (x, y) and testing input x * , optimized parameters τ,θ.",
"(hθ(z k , y) ) for MR-CNP and",
"We show that I(x * ;ŷ * |D, z * , θ) ≤ I(ŷ * ; D|z * , θ).",
"By Figure 4 , we have that I(ŷ * ; x * |θ, D, z * ) = 0.",
"By the chain rule of mutual information we have",
"A.3",
"META REGULARIZATION ON WEIGHTS Similar to (Achille & Soatto, 2018) , we use ξ to denote the unknown parameters of the true data generating distribution.",
"This defines a joint distribution",
"The meta-training loss in Eq. 1 is an upper bound for the cross entropy H p,q (y",
"Here the only negative term is the I(y * 1:N , D 1:N ; θ|x * 1:N , ξ), which quantifies the information that the meta-parameters contain about the meta-training data beyond what can be inferred from the data generating parameters (i.e., memorization).",
"Without proper regularization, the cross entropy loss can be minimized by maximizing this term.",
"We can control its value by upper bounding it",
"where the second equality follows because θ and ξ are conditionally independent given M. This gives the regularization in Section 4.2."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11999999731779099,
0.04444443807005882,
0.25,
0.0416666604578495,
0.20338982343673706,
0.11764705181121826,
0.11764705181121826,
0.17391303181648254,
0.17777776718139648,
0.1428571343421936,
0.2083333283662796,
0.10810810327529907,
0.1599999964237213,
0.0307692252099514,
0.04878048226237297,
0.1538461446762085,
0.1090909019112587,
0.12765957415103912,
0.15789473056793213,
0.19607841968536377,
0.3461538553237915,
0.29999998211860657,
0,
0.1428571343421936,
0.1463414579629898,
0.11538460850715637,
0.15625,
0.19672130048274994,
0.09756097197532654,
0.060606058686971664,
0.4444444477558136,
0.19999998807907104,
0.3461538553237915,
0.14035087823867798,
0.13333332538604736,
0.21739129722118378,
0.07407406717538834,
0.2790697515010834,
0.17142856121063232,
0.19999998807907104,
0.05714285373687744,
0.1818181723356247,
0.3243243098258972,
0.35087719559669495,
0.3499999940395355,
0.1860465109348297,
0.20338982343673706,
0.19999998807907104,
0.17910447716712952,
0.1304347813129425,
0.3870967626571655,
0.3333333432674408,
0.1428571343421936,
0.12765957415103912,
0.10526315122842789,
0.0923076868057251,
0,
0.039215680211782455,
0.06666666269302368,
0,
0.039215680211782455,
0.08695651590824127,
0.05882352590560913,
0.09999999403953552,
0.04651162400841713,
0.05714285373687744,
0.08163265138864517,
0,
0.09302324801683426,
0.1666666567325592,
0.19999998807907104,
0.11428570747375488,
0.12765957415103912
] | BklEFpEYwS | true | [
"We identify and formalize the memorization problem in meta-learning and solve this problem with novel meta-regularization method, which greatly expand the domain that meta-learning can be applicable to and effective on."
] |
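Algorithms 1 and 2 in the record above both hinge on sampling the regularised weights θ ~ q(θ; τ) with the reparameterisation trick and adding a β-weighted divergence to a prior r(θ). Below is a minimal PyTorch sketch of just that penalty; the dimensionality, the N(0, I) prior, the value of β, and the stand-in task loss are assumptions, not the paper's full meta-training loop.

```python
import torch

dim = 128
theta_mu = torch.zeros(dim, requires_grad=True)            # τ = (θ_μ, θ_σ)
theta_logvar = torch.full((dim,), -4.0, requires_grad=True)
beta = 1e-3                                                 # Lagrangian multiplier

def sample_theta():
    """Reparameterised sample θ ~ q(θ; τ) = N(θ_μ, diag(exp(θ_logvar)))."""
    eps = torch.randn(dim)
    return theta_mu + torch.exp(0.5 * theta_logvar) * eps

def kl_q_to_prior():
    """KL( N(θ_μ, σ²) || N(0, I) ), summed over dimensions."""
    var = torch.exp(theta_logvar)
    return 0.5 * torch.sum(var + theta_mu ** 2 - 1.0 - theta_logvar)

theta = sample_theta()
# Stand-in for the meta-training negative log-likelihood computed with θ in the
# pre-adaptation layers (the real objective runs task adaptation first).
task_nll = (theta ** 2).mean()
meta_loss = task_nll + beta * kl_q_to_prior()
meta_loss.backward()   # gradients flow to θ_μ and θ_logvar through the sample
```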
[
"Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning.",
"Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule.",
"Both these approaches pose challenges.",
"On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour.",
"On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation.",
"In this work we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations.",
"WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution.",
"Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner.",
"Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates.",
"WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems.",
"We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.",
"to learn implies inferring a learning strategy from some set of past experiences via a meta-learner that a task-learner can leverage when learning a new task.",
"One approach is to directly parameterise an update rule via the memory of a recurrent neural network (Andrychowicz et al., 2016; Ravi & Larochelle, 2016; Li & Malik, 2016; Chen et al., 2017) .",
"Such memory-based methods can, in principle, represent any learning rule by virtue of being universal function approximators (Cybenko, 1989; Hornik, 1991; Schäfer & Zimmermann, 2007) .",
"They can also scale to long learning processes by using truncated backpropagation through time, but they lack an inductive bias as to what constitutes a reasonable learning rule.",
"This renders them hard to train and brittle to generalisation as their parameter updates have no guarantees of convergence.",
"An alternative family of approaches defines a gradient-based update rule and meta-learns a shared initialisation that facilitates task adaptation across a distribution of tasks (Finn et al., 2017; Nichol et al., 2018; Flennerhag et al., 2019) .",
"Such methods are imbued with a strong inductive biasgradient descent-but restrict knowledge transfer to the initialisation.",
"Recent work has shown that it is beneficial to more directly control gradient descent by meta-learning an approximation of a parameterised matrix (Li et al., 2017; Lee et al., 2017; Park & Oliva, 2019 ) that preconditions gradients during task training, similarly to second-order and Natural Gradient Descent methods (Nocedal & Wright, 2006; Amari & Nagaoka, 2007) .",
"To meta-learn preconditioning, these methods backpropagate through the gradient descent process, limiting them to few-shot learning.",
"In this paper, we propose a novel framework called Warped Gradient Descent (WarpGrad) , that relies on the inductive bias of gradient-based meta-learners by defining an update rule that preconditions gradients, but that is meta-learned using insights from memory-based methods.",
"In particular, we",
"We propose WarpGrad, a novel meta-learner that combines the expressive capacity and flexibility of memory-based meta-learners with the inductive bias of gradient-based meta-learners.",
"WarpGrad meta-learns to precondition gradients during task adaptation without backpropagating through the adaptation process and we find empirically that it retains the inductive bias of MAML-based few-shot learners while being able to scale to complex problems and architectures.",
"Further, by expressing preconditioning through warp-layers that are universal function approximators, WarpGrad is able to express geometries beyond the block-diagonal structure of prior works.",
"WarpGrad provides a principled framework for general-purpose meta-learning that integrates learning paradigms, such as continual learning, an exciting avenue for future research.",
"We introduce novel means for preconditioning, for instance with residual and recurrent warp-layers.",
"Understanding how WarpGrad manifolds relate to second-order optimisation methods will further our understanding of gradient-based meta-learning and aid us in designing warp-layers with stronger inductive bias.",
"In their current form, WarpGrad methods share some of the limitations of many popular metalearning approaches.",
"While WarpGrad is designed to avoid backpropagating through the task training process, as in Warp-Leap, the WarpGrad objective samples from parameter trajectories and has therefore linear computational complexity in the number of adaptation steps, currently an unresolved limitation of gradient-based meta-learning.",
"Our offline algorithm (Algorithm 2) hints at exciting possibilities for overcoming this limitation.",
"WarpGrad is a model-embedded meta-learned optimiser that allows for a number of implementation strategies.",
"Indeed, there is a number of ways warp-layers can be embedded in an architecture of choice.",
"To embed warp-layers given a task-learner architecture, we may either insert new warp-layers in the given architecture or designate some layers as warp-layers and some as task layers.",
"We found that WarpGrad can both be used in a high-capacity mode, where task-learners are relatively weak to avoid overfitting, as well as in a low-capacity mode where task-learners are powerful and warp-layers are relatively weak.",
"The best approach depends on the problem at hand.",
"We highlight three approaches to designing WarpGrad optimisers, starting from a given architecture:",
"(a) Model partitioning.",
"Given a desired architecture, designate some operations as task-adaptable and the rest as warp-layers.",
"Task layers do not have to interleave exactly with warp-layers as gradient warping arises both through the forward pass and through backpropagation.",
"This was how we approached the tieredImageNet and miniImageNet experiments.",
"(b) Model augmentation.",
"Given a model, designate all layers as task-adaptable and interleave warplayers.",
"Warp-layers can be relatively weak as backpropagation through non-linear activations ensures expressive gradient warping.",
"This was our approach to the Omniglot experiment; our main architecture interleaves linear warp-layers in a standard architecture.",
"(c) Information compression.",
"Given a model, designate all layers as warp and interleave weak task layers.",
"In this scenario, task-learners are prone to overfitting.",
"Pushing capacity into the warp allows it to encode general information the task-learner can draw on during task adaptation.",
"This approach is similar to approaches in transfer and meta-learning that restrict the number of free parameters during task training (Rebuffi et al., 2017; Lee & Choi, 2018; Zintgraf et al., 2019 Figure 6 illustrates this process."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25531914830207825,
0.25925925374031067,
0,
0.1818181723356247,
0.31578946113586426,
0.1818181723356247,
0.04999999701976776,
0.1463414579629898,
0.1304347813129425,
0.20512820780277252,
0.2978723347187042,
0.21276594698429108,
0.2222222238779068,
0.1599999964237213,
0.15686273574829102,
0.1395348757505417,
0.2545454502105713,
0.09756097197532654,
0.18421052396297455,
0.1463414579629898,
0.317460298538208,
0,
0.35555556416511536,
0.17241378128528595,
0.20408162474632263,
0.3478260934352875,
0.21621620655059814,
0.19607841968536377,
0.04999999701976776,
0.19672130048274994,
0.052631575614213943,
0.2631579041481018,
0.14999999105930328,
0.08510638028383255,
0.19230768084526062,
0,
0.15789473056793213,
0,
0.10526315122842789,
0.08695651590824127,
0.05714285373687744,
0,
0.1111111044883728,
0,
0.09756097197532654,
0,
0.10810810327529907,
0.060606058686971664,
0.04651162400841713,
0.19672130048274994
] | rkeiQlBFPB | true | [
"We propose a novel framework for meta-learning a gradient-based update rule that scales to beyond few-shot learning and is applicable to any form of learning, including continual learning."
] |
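The WarpGrad record above describes interleaving fixed (meta-learned) warp-layers with task-adaptable layers, so that task gradients are shaped by the warp during adaptation, while the warp itself is updated without backpropagating through the adaptation steps. The sketch below shows that separation of inner and outer updates in PyTorch; the layer sizes, the synthetic regression task, the step counts, and the single-point outer update are simplifying assumptions (the actual objective samples across trajectories and tasks).

```python
import torch
import torch.nn as nn

warp1, warp2 = nn.Linear(32, 32), nn.Linear(32, 1)   # warp-layers (meta-parameters)
task_layer = nn.Linear(8, 32)                         # task-adaptable layer
meta_opt = torch.optim.Adam(list(warp1.parameters()) + list(warp2.parameters()), lr=1e-3)

def forward(x):
    # Warp-layers interleave with the task layer; they shape both the forward
    # pass and the gradient the task layer receives during backpropagation.
    return warp2(torch.relu(warp1(torch.relu(task_layer(x)))))

x, y = torch.randn(16, 8), torch.randn(16, 1)         # toy regression task

# Inner loop: adapt only the task layer; warp parameters stay fixed.
task_opt = torch.optim.SGD(task_layer.parameters(), lr=0.1)
for _ in range(5):
    loss = ((forward(x) - y) ** 2).mean()
    task_opt.zero_grad()
    loss.backward()
    task_opt.step()

# Outer update: improve the warp at the current point of the adaptation
# trajectory, without differentiating through the inner-loop steps above.
meta_opt.zero_grad()
meta_loss = ((forward(x) - y) ** 2).mean()
meta_loss.backward()
meta_opt.step()
```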
[
"In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification.",
"However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images.",
"We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks.",
"Defense-GAN is trained to model the distribution of unperturbed images.",
"At inference time, it finds a close output to a given image which does not contain the adversarial changes.",
"This output is then fed to the classifier.",
"Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure.",
"It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples.",
"We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.",
"Despite their outstanding performance on several machine learning tasks, deep neural networks have been shown to be susceptible to adversarial attacks BID20 BID4 .",
"These attacks come in the form of adversarial examples: carefully crafted perturbations added to a legitimate input sample.",
"In the context of classification, these perturbations cause the legitimate sample to be misclassified at inference time BID20 BID4 BID16 BID11 .",
"Such perturbations are often small in magnitude and do not affect human recognition but can drastically change the output of the classifier.Recent literature has considered two types of threat models: black-box and white-box attacks.",
"Under the black-box attack model, the attacker does not have access to the classification model parameters; whereas in the white-box attack model, the attacker has complete access to the model architecture and parameters, including potential defense mechanisms BID21 BID2 .Various",
"defenses have been proposed to mitigate the effect of adversarial attacks. These defenses",
"can be grouped under three different approaches: (1) modifying the training data to make the classifier more robust against attacks, e.g., adversarial training which augments the training data of the classifier with adversarial examples BID20 BID4 , (2) modifying the training procedure of the classifier to reduce the magnitude of gradients, e.g., defensive distillation BID18 , and (3) attempting to remove the adversarial noise from the input samples BID6 BID13 . All of these approaches",
"have limitations in the sense that they are effective against either white-box attacks or black-box attacks, but not both BID21 BID13 . Furthermore, some of these",
"defenses are devised with specific attack models in mind and are not effective against new attacks.In this paper, we propose a novel defense mechanism which is effective against both white-box and black-box attacks. We propose to leverage the",
"representative power of Generative Adversarial Networks (GAN) to diminish the effect of the adversarial perturbation, by \"projecting\" input images onto the range of the GAN's generator prior to feeding them to the classifier. In the GAN framework, two",
"models are trained simultaneously in an adversarial setting: a generative model that emulates the data distribution, and a discriminative model that predicts whether a certain input came from real data or was artificially created. The generative model learns",
"a mapping G from a low-dimensional vector z ∈ R k to the high-dimensional input sample space R n . During training of the GAN,",
"G is encouraged to generate samples which resemble the training data. It is, therefore, expected",
"that legitimate samples will be close to some point in the range of G, whereas adversarial samples will be further away from the range of G. Furthermore, \"projecting\" the adversarial examples onto the range of the generator G can have the desirable effect of reducing the adversarial perturbation. The projected output, computed",
"using Gradient Descent (GD), is fed into the classifier instead of the original (potentially adversarially modified) image. We empirically demonstrate that",
"this is an effective defense against both black-box and white-box attacks on two benchmark image datasets.The rest of the paper is organized as follows. We introduce the necessary background",
"regarding known attack models, defense mechanisms, and GANs in Section 2. Our defense mechanism, which we call",
"Defense-GAN, is formally motivated and introduced in Section 3. Finally, experimental results, under",
"different threat models, as well as comparisons to other defenses are presented in Section 4.",
"In this paper, we proposed Defense-GAN, a novel defense strategy utilizing GANs to enhance the robustness of classification models against black-box and white-box adversarial attacks.",
"Our method does not assume a particular attack model and was shown to be effective against most commonly considered attack strategies.",
"We empirically show that Defense-GAN consistently provides adequate defense on two benchmark computer vision datasets, whereas other methods had many shortcomings on at least one type of attack.It is worth mentioning that, although Defense-GAN was shown to be a feasible defense mechanism against adversarial attacks, one might come across practical difficulties while implementing and deploying this method.",
"The success of Defense-GAN relies on the expressiveness and generative power of the GAN.",
"However, training GANs is still a challenging task and an active area of research, and if the GAN is not properly trained and tuned, the performance of Defense-GAN will suffer on both original and adversarial examples.",
"Moreover, the choice of hyper-parameters L and R is also critical to the effectiveness of the defense and it may be challenging to tune them without knowledge of the attack."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.060606054961681366,
0.05714285373687744,
0.3243243098258972,
0.1538461446762085,
0.11764705181121826,
0.0833333283662796,
0.1111111044883728,
0.10256409645080566,
0.1764705777168274,
0.10526315122842789,
0.23529411852359772,
0.0555555522441864,
0.2083333283662796,
0.2666666507720947,
0.1428571343421936,
0.08955223858356476,
0.24390242993831635,
0.36734694242477417,
0.13333332538604736,
0.1702127605676651,
0.10810810327529907,
0.06451612710952759,
0.07843136787414551,
0,
0.22727271914482117,
0.1249999925494194,
0.13793103396892548,
0.13333332538604736,
0.4390243887901306,
0.2222222238779068,
0.1428571343421936,
0.1428571343421936,
0.1304347813129425,
0.10256409645080566
] | BkJ3ibb0- | true | [
"Defense-GAN uses a Generative Adversarial Network to defend against white-box and black-box attacks in classification models."
] |
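The projection step summarized in the record above (gradient descent over the latent vector, with L steps and R random restarts, to find the generator output closest to a given input before classification) can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation; the generator interface `G`, the latent size `z_dim`, and the learning rate are assumptions.

```python
# Hypothetical sketch of the Defense-GAN projection step: given a (possibly
# adversarial) image x, find z minimizing ||G(z) - x||^2 with R random
# restarts and L gradient-descent steps, then classify G(z*) instead of x.
import torch

def defense_gan_project(G, x, z_dim, L=200, R=10, lr=0.01):
    """Return the generator output closest (in L2) to x."""
    best_loss, best_out = float("inf"), None
    for _ in range(R):                          # R random restarts
        z = torch.randn(x.shape[0], z_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(L):                      # L gradient-descent steps on z
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():
            loss = ((G(z) - x) ** 2).mean().item()
            if loss < best_loss:
                best_loss, best_out = loss, G(z)
    return best_out                              # fed to the classifier
```

The returned reconstruction is what would be fed to the classifier in place of the possibly perturbed input; the final reconstruction error itself can also flag inputs that lie far from the generator's range.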
[
"We study the problem of learning similarity functions over very large corpora using neural network embedding models.",
"These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale.\n",
"We propose new efficient methods to train these models without having to sample unobserved pairs.",
"Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians.",
"We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates.",
"We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods.",
"We consider the problem of learning a similarity function h : X × Y → R, that maps each pair of items, represented by their feature vectors (x, y) ∈ X × Y, to a real number h(x, y), representing their similarity.",
"We will refer to x and y as the left and right feature vectors, respectively.",
"Many problems can be cast in this form: In natural language processing, x represents a context (e.g. a bag of words), y represents a candidate word, and the target similarity measures the likelihood to observe y in context x BID14 BID16 BID13 .",
"In recommender systems, x represents a user query, y represents a candidate item, and the target similarity is a measure of relevance of item y to query x, e.g. a movie rating BID0 , or the likelihood to watch a given movie BID12 Rendle, 2010) .",
"Other applications include image similarity, where x and y are pixel-representations of images BID5 BID6 Schroff et al., 2015) , and network embedding models BID10 Qiu et al., 2018) , where x and y are nodes in a graph and the similarity is whether an edge connects them.A popular approach to learning similarity functions is to train an embedding representation of each item, such that items with high similarity are mapped to vectors that are close in the embedding space.",
"A common property of such problems is that only a small subset of all possible pairs X × Y is present in the training set, and those examples typically have high similarity.",
"Training exclusively on observed examples has been demonstrated to yield poor generalization performance.",
"Intuitively, when trained only on observed pairs, the model places the embedding of a given item close to similar items, but does not learn to place it far from dissimilar ones (Shazeer et al., 2016; Xin et al., 2017) .",
"Taking into account unobserved pairs is known to improve the embedding quality in many applications, including recommendation BID12 BID1 and word analogy tasks (Shazeer et al., 2016) .",
"This is often achieved by adding a low-similarity prior on all pairs, which acts as a repulsive force between all embeddings.",
"But because it involves a number of terms quadratic in the corpus size, this term is computationally intractable (except in the linear case), and it is typically optimized using sampling: for each observed pair in the training set, a set of random unobserved pairs is sampled and used to compute an estimate of the repulsive term.",
"But as the corpus size increases, the quality of the estimates deteriorates unless the sample size is increased, which limits scalability.In this paper, we address this issue by developing new methods to efficiently estimate the repulsive term, without sampling unobserved pairs.",
"Our approach is inspired by matrix factorization models, which correspond to the special case of linear embedding functions.",
"They are typically trained using alternating least squares BID12 , or coordinate descent methods BID2 , which circumvent the computational burden of the repulsive term by writing it as a matrix-inner-product of two Gramians, and computing the left Gramian before optimizing over the right embeddings, and viceversa.",
"Unfortunately, in non-linear embedding models, each update of the model parameters induces a simulateneous change in all embeddings, making it impractical to recompute the Gramians at each iteration.",
"As a result, the Gramian formulation has been largely ignored in the non-linear setting, where models are instead trained using stochastic gradient methods with sampling of unobserved pairs, see BID7 .",
"Vincent et al. (2015) were, to our knowledge, the first to attempt leveraging the Gramian formulation in the non-linear case.",
"They consider a model where only one of the embedding functions is non-linear, and show that the gradient can be computed efficiently in that case.",
"Their result is remarkable in that it allows exact gradient computation, but this unfortunately does not generalize to the case where both embedding functions are non-linear.Contributions We propose new methods that leverage the Gramian formulation in the non-linear case, and that, unlike previous approaches, are efficient even when both left and right embeddings are non-linear.",
"Our methods operate by maintaining stochastic estimates of the Gram matrices, and using different variance reduction schemes to improve the quality of the estimates.",
"We perform several experiments that show these methods scale far better than traditional sampling approaches on very large corpora.We start by reviewing preliminaries in Section 2, then derive the Gramian-based methods and analyze them in Section 3.",
"We conduct large-scale experiments on the Wikipedia dataset in Section 4, and provide additional experiments in the appendix.",
"All the proofs are deferred to Appendix A.",
"We showed that the Gramian formulation commonly used in low-rank matrix factorization can be leveraged for training non-linear embedding models, by maintaining estimates of the Gram matrices and using them to estimate the gradient.",
"By applying variance reduction techniques to the Gramians, one can improve the quality of the gradient estimates, without relying on large sample size as is done in traditional sampling methods.",
"This leads to a significant impact on training time and generalization quality, as indicated by our experiments.",
"While we focused on problems with very large vocabulary size, where traditional approaches are inefficient, it will be interesting to evaluate our methods on other applications such as word-analogy tasks BID14 Schnabel et al. (2015) .",
"Another direction of future work is to extend this formulation to a larger family of penalty functions, such as the spherical loss family studied in (Vincent et al., 2015; BID8"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.260869562625885,
0.21052631735801697,
0.2790697515010834,
0.22641508281230927,
0.3396226465702057,
0.2448979616165161,
0.2153846174478531,
0.1860465109348297,
0.1875,
0.1538461446762085,
0.23255813121795654,
0.20338982343673706,
0.0476190447807312,
0.1538461446762085,
0.17543859779834747,
0.0833333283662796,
0.1944444328546524,
0.1846153736114502,
0.21276594698429108,
0.17142856121063232,
0.2222222238779068,
0.24137930572032928,
0.1304347813129425,
0.23076923191547394,
0.21333332359790802,
0.40816324949264526,
0.1904761791229248,
0.1818181723356247,
0.10810810327529907,
0.39344263076782227,
0.17543859779834747,
0.17391303181648254,
0.0952380895614624,
0.17543859779834747
] | Hke20iA9Y7 | true | [
"We develop efficient methods to train neural embedding models with a dot-product structure, by reformulating the objective function in terms of generalized Gram matrices, and maintaining estimates of those matrices."
] |
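The Gramian reformulation described in the record above (rewriting the global quadratic penalty as an inner product of two Gram matrices and maintaining estimates of them instead of sampling unobserved pairs) can be illustrated as follows. The simple moving-average update below is an assumption for illustration only; the paper's actual variance-reduction schemes are more elaborate.

```python
# Illustrative sketch: the global penalty sum_{x,y} (u(x)^T v(y))^2 equals
# <G_u, G_v> with G_u = sum_x u(x) u(x)^T and G_v = sum_y v(y) v(y)^T,
# so it can be estimated from running Gramian estimates instead of sampled pairs.
import torch

def batch_gramian(emb):                  # emb: (batch, k) embeddings
    return emb.t() @ emb / emb.shape[0]

class GramianEstimator:
    def __init__(self, k, momentum=0.99):
        self.G = torch.zeros(k, k)
        self.momentum = momentum

    def update(self, emb):
        # Exponential moving average of batch Gramians (detached, no gradient).
        self.G = self.momentum * self.G + (1 - self.momentum) * batch_gramian(emb.detach())

def gramian_penalty(left_emb, right_emb, est_left, est_right):
    # Each side's batch Gramian is paired with the (fixed) running estimate of
    # the other side, so the penalty's gradient only touches the current batch.
    return (batch_gramian(left_emb) * est_right.G).sum() + \
           (batch_gramian(right_emb) * est_left.G).sum()
```

In a training loop one would update both estimators after each batch and add the penalty, weighted by a regularization coefficient, to the loss over observed pairs.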
[
"For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset.",
"In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks.",
"By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks.",
"GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models.",
"We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task.",
"However, the low absolute performance of our best model indicates the need for improved general NLU systems.",
"The human ability to understand language is general, flexible, and robust.",
"In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data.",
"If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a more unified model that can learn to execute a range of different linguistic tasks in different domains.To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE) benchmark: a collection of NLU tasks including question answering, sentiment analysis, and textual entailment, and an associated online platform for model evaluation, comparison, and analysis.",
"GLUE does not place any constraints on model architecture beyond the ability to process single-sentence and sentence-pair inputs and to make corresponding predictions.",
"For some GLUE tasks, training data is plentiful, but for others it is limited or fails to match the genre of the test set.",
"GLUE therefore favors models that can learn to represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks.",
"None of the datasets in GLUE were created from scratch for the benchmark; we rely on preexisting datasets because they have been implicitly agreed upon by the NLP community as challenging and interesting.",
"Four of the datasets feature privately-held test data, which will be used to ensure that the benchmark is used fairly.",
"Table 1 : Task descriptions and statistics.",
"All tasks are single sentence or sentence pair classification, except STS-B, which is a regression task.",
"MNLI has three classes; all other classification tasks have two.",
"Test sets shown in bold use labels that have never been made public in any form.To better understand the challenged posed by GLUE, we conduct experiments with simple baselines and state-of-the-art sentence representation models.",
"We find that unified multi-task trained models slightly outperform comparable models trained on each task separately.",
"Our best multi-task model makes use of ELMo BID2 , a recently proposed pre-training technique.",
"However, this model still achieves a fairly low absolute score.",
"Analysis with our diagnostic dataset reveals that our baseline models deal well with strong lexical signals but struggle with deeper logical structure.In summary, we offer:",
"(i) A suite of nine sentence or sentence-pair NLU tasks, built on established annotated datasets and selected to cover a diverse range of text genres, dataset sizes, and degrees of difficulty.(ii",
") An online evaluation platform and leaderboard, based primarily on privately-held test data. The",
"platform is model-agnostic, and can evaluate any method capable of producing results on all nine tasks. (iii",
") An expert-constructed diagnostic evaluation dataset. (iv",
") Baseline results for several major existing approaches to sentence representation learning.",
"We introduce GLUE, a platform and collection of resources for evaluating and analyzing natural language understanding systems.",
"We find that, in aggregate, models trained jointly on our tasks see better performance than the combined performance of models trained for each task separately.",
"We confirm the utility of attention mechanisms and transfer learning methods such as ELMo in NLU systems, which combine to outperform the best sentence representation models on the GLUE benchmark, but still leave room for improvement.",
"When evaluating these models on our diagnostic dataset, we find that they fail (often spectacularly) on many linguistic phenomena, suggesting possible avenues for future work.",
"In sum, the question of how to design general-purpose NLU models remains unanswered, and we believe that GLUE can provide fertile soil for addressing this challenge.",
"A ADDITIONAL BENCHMARK DETAILS QNLI To construct a balanced dataset, we select all pairs in which the most similar sentence to the question was not the answer sentence, as well as an equal amount of cases in which the correct sentence was the most similar to the question, but another distracting sentence was a close second.",
"Our similarity metric is based on CBoW representations with pre-trained GloVe embeddings.",
"This approach to converting pre-existing datasets into NLI format is closely related to recent work by BID16 , as well as to the original motivation for textual entailment presented by Dagan et al. (2006) .",
"Both argue that many NLP tasks can be productively reduced to textual entailment."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2380952388048172,
0.1395348757505417,
0.05405404791235924,
0.12903225421905518,
0.2380952388048172,
0.1249999925494194,
0.14814814925193787,
0.1666666567325592,
0.1975308656692505,
0.05405404791235924,
0.052631575614213943,
0.15789473056793213,
0.1304347813129425,
0.05882352590560913,
0.08695651590824127,
0.06451612710952759,
0,
0.07999999821186066,
0.13333332538604736,
0.12903225421905518,
0.07692307233810425,
0,
0.08888888359069824,
0.13333332538604736,
0.12121211737394333,
0,
0.0714285671710968,
0.625,
0.15789473056793213,
0.1599999964237213,
0.09999999403953552,
0.0952380895614624,
0.0714285671710968,
0,
0.043478257954120636,
0
] | rJ4km2R5t7 | true | [
"We present a multi-task benchmark and analysis platform for evaluating generalization in natural language understanding systems."
] |
[
"A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success.",
"This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents.",
"To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment.",
"We use a function augmentation scheme to bridge value and policy functions across the curriculum.",
"The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.",
"Many real-world scenarios that require cooperation among multiple autonomous agents are multi-goal multi-agent control problems: each agent needs to achieve its own individual goal, but the global optimum where all agents succeed is only attained when agents cooperate to allow the success of other agents.",
"In autonomous driving, multiple vehicles must execute cooperative maneuvers when their individual goal locations and nominal trajectories are in conflict (e.g., double lane merges) (Cao et al., 2013) .",
"In social dilemmas, mutual cooperation has higher global payoff but agents' individual goals may lead to defection out of fear or greed (Van Lange et al., 2013) .",
"Even settings with a global objective that seem unfactorizable can be formulated as multi-goal problems: in Starcraft II micromanagement, a unit that gathers resources must not accidentally jeopardize a teammate's attempt to scout the opponent base (Blizzard Entertainment, 2019) ; in traffic flow optimization, different intersection controllers may have local throughput goals but must cooperate for high global performance (Zhang et al., 2019) .",
"While the framework of multi-agent reinforcement learning (MARL) (Littman, 1994; Stone and Veloso, 2000; Shoham et al., 2003) has been equipped with methods in deep reinforcement learning (RL) (Mnih et al., 2015; Lillicrap et al., 2016) and shown promise on high-dimensional problems with complex agent interactions (Lowe et al., 2017; Mordatch and Abbeel, 2018; Foerster et al., 2018; Lin et al., 2018; Srinivasan et al., 2018) , learning multi-agent cooperation in the multi-goal scenario involves significant open challenges.",
"First, given that exploration is crucial for RL (Thrun, 1992) and even more so in MARL with larger state and joint action spaces, how should agents explore to learn both individual goal attainment and cooperation for others' success?",
"Uniform random exploration is common in deep MARL (Hernandez-Leal et al., 2018) but can be highly inefficient as the value of cooperative actions may be discoverable only in small regions of state space where cooperation is needed.",
"Furthermore, the conceptual difference between attaining one's own goal and cooperating for others' success calls for more modularized and targeted approaches.",
"Second, while there are methods for multi-agent credit assignment when all agents share a single goal (i.e., a global reward) (Chang et al., 2004; Foerster et al., 2018; Nguyen et al., 2018) , and while one could treat the cooperative multi-goal scenario as a problem with a single joint goal, this coarse approach makes it extremely difficult to evaluate the impact of an agent's action on another agent's success.",
"Instead, the multi-goal scenario can benefit from fine-grained credit assignment that leverages available structure in action-goal interactions, such as local interactions where only few agents affect another agent's goal attainment at any time.",
"Given these open challenges, our paper focuses on the cooperative multi-goal multi-agent setting where each agent is assigned a goal 1 and must learn to cooperate with other agents with possibly different goals.",
"To tackle the problems of efficient exploration and credit assignment in this complex problem setting, we develop CM3, a novel general framework involving three synergistic components:",
"1. We approach the difficulty of multi-agent exploration from a novel curriculum learning perspective, by first training an actor-critic pair to achieve different goals in an induced single-agent setting (Stage 1), then using them to initialize all agents in the multi-agent environment (Stage 2).",
"The key insight is that agents who can already act toward individual objectives are better prepared for discovery of cooperative solutions with additional exploration once other agents are introduced.",
"In contrast to hierarchical learning where sub-goals are selected sequentially in time (Sutton et al., 1999) , all agents act toward their goals simultaneously in Stage 2 of our curriculum.",
"2. Observing that a wide array of complex MARL problems permit a decomposition of agents' observations and state vectors into components of self, others, and non-agent specific environment information (Hernandez-Leal et al., 2018) , we employ function augmentation to bridge Stages 1-2: we reduce the number of trainable parameters of the actor-critic in Stage 1 by limiting their input space to the part that is sufficient for single-agent training, then augment the architecture in Stage 2 with additional inputs and trainable parameters for learning in the multi-agent environment.",
"3. We propose a credit function, which is an action-value function that specifically evaluates actiongoal pairs, for localized credit assignment in multi-goal MARL.",
"We use it to derive a multi-goal multi-agent policy gradient for Stage 2.",
"In synergy with the curriculum, the credit function is constructed via function augmentation from the critic in Stage 1.",
"We evaluate our method on challenging multi-goal multi-agent environments with high-dimensional state spaces: cooperative navigation with difficult formations, double lane merges in the SUMO simulator (Lopez et al., 2018) , and strategic teamwork in a Checkers game.",
"CM3 solved all domains significantly faster than IAC and COMA (Tan, 1993; Foerster et al., 2018) , and solved four out of five environments significantly faster than QMIX (Rashid et al., 2018) .",
"Exhaustive ablation experiments show that the combination of all three components is crucial for CM3's overall high performance.",
"CM3 finds optimal or near-optimal policies significantly faster than IAC and COMA on all domains, and performs significantly higher than QMIX in four out of five.",
"We report absolute runtime in Appendix F and account for CM3's Stage 1 episodes (Appendix J) when comparing sample efficiency.",
"Main comparison.",
"Over all cooperative navigation scenarios (Figures 5a to 5c), CM3 (with 1k episodes in Stage 1) converged more than 15k episodes faster than IAC.",
"IAC reached the same final performance as CM3 because dense individual rewards simplifies the learning problem for IAC's fully decentralized approach, but CM3 benefited significantly from curriculum learning, as evidenced by comparison to \"Direct\" in Figure 5f .",
"QMIX and COMA settled at suboptimal behavior.",
"Both learn global critics that use all goals as input, in contrast to CM3 and IAC that process each goal separately.",
"This indicates the difficulty of training agents for individual goals under a purely global approach.",
"While COMA was shown to outperform IAC in SC2 micromanagement where IAC must learn from a single team reward (Foerster et al., 2018) , our IAC agents have access to individual rewards that resolve the credit assignment issue and improve performance (Singh et al., 2019) .",
"In SUMO (Figure 5d ), CM3 and QMIX found cooperative solutions with performances within the margin of error, while COMA and IAC could not break out of local optima where vehicles move straight but do not perform merge maneuvers.",
"Since initial states force agents into the region of state space requiring cooperation, credit assignment rather than exploration is the dominant challenge, which CM3 addressed via the credit function, as evidenced in Figure 5i .",
"IAC underperformed because SUMO requires a longer sequence of cooperative actions and gave much sparser rewards than the \"Merge\" scenario in cooperative navigation.",
"We also show that centralized training of merely two decentralized agents allows them to generalize to settings with much heavier traffic (Appendix E).",
"In Checkers (Figure 5e ), CM3 (with 5k episodes in Stage 1) converged 10k episodes faster than COMA and QMIX to the global optimum with score 24.",
"Both exploration of the combinatorially large joint trajectory space and credit assignment for path clearing are challenges that CM3 successfully addressed.",
"COMA only solved Checkers among all domains, possibly because the small bounded environment alleviates COMA's difficulty with individual goals in large state spaces.",
"IAC underperformed all centralized learning methods because cooperative actions that give no instantaneous reward are hard for selfish agents to discover in Checkers.",
"These results demonstrate CM3's ability to attain individual goals and find cooperative solutions in diverse multi-agent systems.",
"Ablations.",
"The significantly better performance of CM3 versus \"Direct\" (Figures 5f to 5j) shows that learning individual goal attainment prior to learning multi-agent cooperation, and initializing Stage 2 with Stage 1 parameters, are crucial for improving learning speed and stability.",
"It gives evidence that while global action-value and credit functions may be difficult to train from scratch, function augmentation significantly eases the learning problem.",
"While \"QV\" initially learns quickly to attain individual goals, it does so at the cost of frequent collisions, higher variance, and inability to maintain a cooperative solution, giving clear evidence for the necessity of the credit function.",
"We presented CM3, a general framework for cooperative multi-goal MARL.",
"CM3 addresses the need for efficient exploration to learn both individual goal attainment and cooperation, via a two-stage curriculum bridged by function augmentation.",
"It achieves local credit assignment between action and goals using a credit function in a multi-goal policy gradient.",
"In diverse experimental domains, CM3 attains significantly higher performance, faster learning, and overall robustness than existing MARL methods, displaying strengths of both independent learning and centralized credit assignment while avoiding shortcomings of existing methods.",
"Ablations demonstrate each component is crucial to the whole framework.",
"Our results motivate future work on analyzing CM3's theoretical properties and generalizing to inhomogeneous systems or settings without known goal assignments.",
"Hernandez-Leal, P., Kartal, B., and Taylor Tampuu, A., Matiisen, T., Kodelja, D., Kuzovkin, I., Korjus, K., Aru, J., Aru, J., and Vicente, R. Instantiate N > 1 agents 8: Set all target network weights to equal main networks weights 13:",
"Initialize exploration parameter = start and empty replay buffer B"
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.15789473056793213,
0.2539682388305664,
0.24137930572032928,
0.1111111044883728,
0.1666666567325592,
0.06557376682758331,
0.07843136787414551,
0,
0.05128204822540283,
0.18421052396297455,
0.1071428507566452,
0.072727270424366,
0.09999999403953552,
0.20000000298023224,
0.18518517911434174,
0.18867923319339752,
0.21276594698429108,
0.1355932205915451,
0.1249999925494194,
0.07999999821186066,
0.09195402264595032,
0.1860465109348297,
0.1764705777168274,
0.05405404791235924,
0.2142857164144516,
0.04444443807005882,
0.051282044500112534,
0.09090908616781235,
0.09756097197532654,
0.04651162400841713,
0.1818181723356247,
0.0714285671710968,
0.04878048226237297,
0.0555555522441864,
0.09836065024137497,
0.07017543166875839,
0.11538460850715637,
0.09302324801683426,
0,
0.04255318641662598,
0.2380952388048172,
0,
0.13636362552642822,
0.15789473056793213,
0.145454540848732,
0.13333332538604736,
0.14814814925193787,
0.19354838132858276,
0.22727271914482117,
0.21621620655059814,
0.19230768084526062,
0,
0.0952380895614624,
0.06896550953388214,
0.12903225421905518
] | S1lEX04tPr | true | [
"A modular method for fully cooperative multi-goal multi-agent reinforcement learning, based on curriculum learning for efficient exploration and credit assignment for action-goal interactions."
] |
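The credit function described in the record above (an action-value function over action–goal pairs used inside a multi-goal policy gradient) suggests a counterfactual-style advantage that isolates one agent's effect on another agent's goal. The sketch below is an illustrative assumption of how such localized credit could be computed, not the paper's exact estimator; the tensor shapes and baseline choice are hypothetical.

```python
# Hypothetical sketch of a per-(agent, goal) advantage built from a credit
# function Q(s, a^n, g^m): the baseline marginalises agent n's action under
# its own policy, so the advantage isolates the effect of a^n on goal g^m.
import numpy as np

def credit_advantage(q_values, policy_probs, action):
    """
    q_values:     (num_actions,) credit Q(s, a, g^m) for each action of agent n
    policy_probs: (num_actions,) pi(a | o^n, g^n) of agent n
    action:       int, the action actually taken by agent n
    """
    baseline = float(np.dot(policy_probs, q_values))   # E_a[Q(s, a, g^m)]
    return q_values[action] - baseline

# Example: an action that helps another agent's goal receives positive credit.
q = np.array([0.2, 1.5, 0.1])
pi = np.array([0.5, 0.3, 0.2])
print(credit_advantage(q, pi, action=1))   # > 0: cooperative action rewarded
```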
[
"We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework.",
"The first problem is implicit bias present in the reward functions used in these algorithms.",
"While these biases might work well for some environments, they can also lead to sub-optimal behavior in others.",
"Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications.",
"In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10.",
"Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"The Adversarial Imitation Learning (AIL) class of algorithms learns a policy that robustly imitates an expert's actions via a collection of expert demonstrations, an adversarial discriminator and a reinforcement learning method.",
"For example, the Generative Adversarial Imitation Learning (GAIL) algorithm BID19 ) uses a discriminator reward and a policy gradient algorithm to imitate an expert RL policy.",
"Similarly, the Adversarial Inverse Reinforcement Learning (AIRL) algorithm BID10 ) makes use of a modified GAIL discriminator to recover a reward function to perform Inverse Reinforcement Learning (IRL) BID1 .",
"Additionally, this subsequent dense reward is robust to changes in dynamics or environment properties.",
"Importantly, AIL algorithms such as GAIL and AIRL, obtain higher performance than supervised Behavioral Cloning (BC) when using a small number of expert demonstrations; experimentally suggesting that AIL algorithms alleviate some of the distributional drift BID35 issues associated with BC.",
"However, these AIL methods suffer from two important issues that will be addressed by this work:",
"1) a large number of policy interactions with the learning environment is required for policy convergence and",
"2) although in principle these methods can learn rewards for absorbing states, the original implementations suffer from improper handling of the environment terminal states.",
"This introduces implicit rewards priors which can either improve or degrade policy performance.",
"Figure 1 : The Discriminator-Actor-Critic imitation learning framework combined with a method to explicitly learn rewards for the absorbing states.While GAIL requires as little as 200 expert frame transitions (from 4 expert trajectories) to learn a robust reward function on most MuJoCo BID41 tasks, the number of policy frame transitions sampled from the environment can be as high as 25 million in order to reach convergence.",
"If PPO ) is used in place of TRPO BID37 , the sample complexity can be improved (for example, as in Figure 3 , 25 million steps reduces to approximately 10 million steps), however it is still intractable for many robotics or real-world applications.",
"In this work we address this issue by incorporating an off-policy RL algorithm (TD3 BID11 ) and an off-policy discriminator to dramatically decrease the sample complexity by orders of magnitude.In this work, we also illustrate how specific design choices for AIL algorithms and MDPs used in practice, have a large impact on agent performance for environments with absorbing states.",
"For instance, as we will demonstrate, if the implementation assigns zero rewards for absorbing states, a strictly positive reward function can prevent the agent from solving tasks with a minimal number of steps, while a strictly negative reward function is unable to emulate a survival bonus.",
"Therefore, one must have some knowledge of the true environment reward and incorporate such priors to choose a suitable reward function for successful application of GAIL and AIRL.",
"We will discuss these issues formally, and present a simple -yet effective -solution that drastically improves policy performance for environments with absorbing states; we explicitly handle absorbing state transitions by learning the reward associated with these states.First we propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (Figure",
"1) , that is compatible with the GAIL and AIRL frameworks by extending them with an off-policy discriminator and an off-policy actor-critic reinforcement learning algorithm.",
"Then we propose a general approach to handling absorbing states in inverse reinforcement learning and reward learning methods.",
"We experimentally demonstrate that this removes the bias due to incorrect absorbing state handling in both GAIL-like and AIRL-like variants of our DAC algorithm.",
"In our experiments, we demonstrate that DAC achieves state-of-the-art AIL performance for a number of difficult imitation learning tasks, where proper handling of terminal states is crucial for matching expert performance in the presence of absorbing states.",
"More specifically, in this work we:• Identify, and propose solutions for the problem of handling terminal states of policy rollouts in standard RL benchmarks in the context of AIL algorithms.•",
"Accelerate learning from demonstrations by providing an off-policy variant for AIL algorithms, which significantly reduces the number of agent-environment interactions.•",
"Illustrate the robustness of DAC to noisy, multi-modal and constrained expert demonstrations, by performing experiments with human demonstrations on non-trivial robotic tasks.",
"In this work we address several important issues associated with the popular GAIL framework.",
"In particular, we address",
"1) sample inefficiency with respect to policy transitions in the environment and",
"2) we demonstrate a number of reward biases that can either implicitly impose prior knowledge about the true reward, or alternatively, prevent the policy from imitating the optimal expert.",
"To Figure 6 : Effect of learning absorbing state rewards when using an AIRL discriminator within the DAC Framework in OpenAI Gym environments.address reward bias, we propose a simple mechanism whereby the rewards for absorbing states are also learned, which negates the need to hand-craft a discriminator reward function for the properties of the task at hand.",
"In order to improve sample efficiency, we perform off-policy training of the discriminator and use an off-policy RL algorithm.",
"We show that our algorithm reaches state-of-theart performance for an imitation learning algorithm on several standard RL benchmarks, and is able to recover the expert policy given a significantly smaller number of samples than in recent GAIL work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.12903225421905518,
0.2666666507720947,
0.05882352590560913,
0.08695651590824127,
0.08888888359069824,
0.05405404791235924,
0.1860465109348297,
0.10256409645080566,
0.09999999403953552,
0.13333332538604736,
0.18867924809455872,
0,
0.1249999925494194,
0.05128204822540283,
0,
0.1690140813589096,
0.1071428507566452,
0.1492537260055542,
0.072727270424366,
0.24390242993831635,
0.1355932205915451,
0.21621620655059814,
0.24242423474788666,
0.19999998807907104,
0.1249999925494194,
0.1428571343421936,
0.05405404791235924,
0.052631575614213943,
0.13333332538604736,
0.09999999403953552,
0.2857142686843872,
0.04651162400841713,
0.158730149269104,
0.11764705181121826,
0.22641508281230927
] | Hk4fpoA5Km | true | [
"We address sample inefficiency and reward bias in adversarial imitation learning algorithms such as GAIL and AIRL."
] |
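The absorbing-state fix described in the record above (learning a reward for terminal transitions instead of implicitly assigning them zero reward) is commonly realized by making the absorbing state explicit in the replay buffer so the discriminator reward covers it too. The indicator-dimension layout and wrapper below are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch: append an explicit absorbing state (an extra indicator
# dimension set to 1, all other features zero) when an episode ends, so the
# learned discriminator reward also covers absorbing transitions instead of
# being implicitly fixed to zero.
import numpy as np

def absorbing(state_dim):
    s = np.zeros(state_dim + 1)
    s[-1] = 1.0                      # indicator: "this is the absorbing state"
    return s

def pad(state):
    return np.append(state, 0.0)     # ordinary states get indicator 0

def store_episode(buffer, transitions, state_dim):
    """transitions: list of (s, a, s_next, done) tuples from one rollout."""
    for s, a, s_next, done in transitions:
        if done:
            # terminal step transitions into the absorbing state ...
            buffer.append((pad(s), a, absorbing(state_dim)))
            # ... which then self-loops with a null action, so the reward of
            # staying absorbed can be learned as well.
            buffer.append((absorbing(state_dim), a * 0, absorbing(state_dim)))
        else:
            buffer.append((pad(s), a, pad(s_next)))
```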
[
"Capsule Networks have shown encouraging results on defacto benchmark computer vision datasets such as MNIST, CIFAR and smallNORB.",
"Although, they are yet to be tested on tasks where (1) the entities detected inherently have more complex internal representations and (2) there are very few instances per class to learn from and (3) where point-wise classification is not suitable.",
"Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points.",
"In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks.",
"The model is trained using contrastive loss with l2-normalized capsule encoded pose features.",
"We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets, yielding best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.",
"Convolutional Neural networks (CNNs) have been a mainstay model for a wide variety of tasks in computer vision.",
"CNNs are effective at detecting local features in the receptive field, although the spatial relationship between features is lost when crude routing operations are performed to achieve translation invariance, as is the case with max and average pooling.",
"Essentially, pooling results in viewpoint invariance so that small perturbations in the input do not effect the output.",
"This leads to a significant loss of information about the internal properties of present entities (e.g location, orientation, shape and pose) in an image and relationships between them.",
"The issue is usually combated by having large amounts of annotated data from a wide variety of viewpoints, albeit redundant and less efficient in many cases.",
"As noted by hinton1985shape, from a psychology perspective of human shape perception, pooling does not account for the coordinate frames imposed on objects when performing mental rotation to identify handedness BID20 ; BID16 BID10 .",
"Hence, the scalar output activities from local kernel regions that summarize sets of local inputs are not sufficient for preserving reference frames that are used in human perception, since viewpoint information is discarded.",
"Spatial Transformer Networks (STN) BID11 have acknowledged the issue by using dynamic spatial transformations on feature mappings to enhance the geometric invariance of the model, although this approach addresses changes in viewpoint by learning to remove rotational and scale variance, as opposed to viewpoint variance being reflected in the model activations.",
"Instead of addressing translation invariance using pooling operations, BID6 have worked on achieving translation equivariance.The recently proposed Capsule Networks BID21 ; BID5 have shown encouraging results to address these challenges.",
"Thus far, Capsule Networks have only been tested on datasets that have (1) a relatively sufficient number of instances per class to learn from and (2) utilized on tasks in the standard classification setup.",
"This paper extends Capsule Networks to the pairwise learning setting to learn relationships between whole entity encodings, while also demonstrating their ability to learn from little data that can perform few-shot learning where instances from new classes arise during testing (i.e zero-shot prediction).",
"The Siamese Capsule Network is trained using a contrastive loss with 2 -normalized encoded features and demonstrated on two face verification tasks.",
"BID6 first introduced the idea of using whole vectors to represent internal properties (referred to as instantiation parameters that include pose) of an entity with an associated activation probability where each capsule represents a single instance of an entity within in an image.",
"This differs from the single scalar outputs in conventional neural networks where pooling is used as a crude routing operation over filters.",
"Pooling performs sub-sampling so that neurons are invariant to viewpoint change, instead capsules look to preserve the information to achieve equivariance, akin to perceptual systems.",
"Hence, pooling is replaced with a dynamic routing scheme to send lowerlevel capsule (e.g nose, mouth, ears etc.) outputs as input to parent capsule (e.g face) that represent part-whole relationships to achieve translation equivariance and untangles the coordinate frame of an entity through linear transformations.",
"The idea has its roots in computer graphics where images are rendered given an internal hierarchical representation, for this reason the brain is hypothesized to solve an inverse graphics problem where given an image the cortex deconstructs it to its latent hierarchical properties.",
"The original paper by BID21 describes a dynamic routing scheme that represent these internal representations as vectors given a group of designated neurons called capsules, which consist of a pose vector u ∈ R d and activation α ∈ [0, 1].",
"The architecture consists of two convolutional layers that are used as the initial input representations for the first capsule layer that are then routed to a final class capsule layer.",
"The initial convolutional layers allow learned knowledge from local feature representations to be reused and replicated in other parts of the receptive field.",
"The capsule inputs are determined using a Iterative Dynamic Routing scheme.",
"A transformation W ij is made to output vector u i of capsule C L i .",
"The length of the vector u i represents the probability that this lower-level capsule detected a given object and the direction corresponds to the state of the object (e.g orientation, position or relationship to upper capsule).",
"The output vector u i is transformed into a prediction vectorû j|i , whereû j|i = W ij u i .",
"Then,û j|i is weighted by a coupling coefficient c ij to obtain s j = i c ijûj|i , where coupling coefficients for each capsule j c ij = 1 and c ij is got by log prior probabilities b ij from a sigmoid function, followed by the softmax, c ij = e bij / k e b ik .",
"Ifû L j|i has high scalar magnitude when multiplied by u L+1 j then the coupling coefficient c ij is increased and the remaining potential parent capsules coupling coefficients are decreased.",
"Routing By Agreement is then performed using coincidence filtering to find tight clusters of nearby predictions.",
"The entities output vector length is represented as the probability of an entity being present by using the nonlinear normalization shown in Equation 1 where vote v j is the output from total input s j , which is then used to compute the agreement a ij = v jûj|i that is added to the log prior b ij .",
"This paper has introduced the Siamese Capsule Network, a novel architecture that extends Capsule Networks to the pairwise learning setting with a feature 2 -normalized contrastive loss that maximizes inter-class variance and minimizes intra-class variance.",
"The results indicate Capsule Networks perform better at learning from only few examples and converge faster when a contrastive loss is used that takes face embeddings in the form of encoded capsule pose vectors.",
"We find Siamese Capsule Networks to perform particularly well on the AT&T dataset in the few-shot learning setting, which is tested on unseen classes (i.e subjects) during testing, while competitive against baselines for the larger Labeled Faces In The Wild dataset."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09302324801683426,
0.13114753365516663,
0.08888888359069824,
0.5,
0.052631575614213943,
0.3571428656578064,
0.2380952388048172,
0.06896550953388214,
0.1463414579629898,
0.11538460850715637,
0.07999999821186066,
0.10169491171836853,
0.2181818187236786,
0.14705881476402283,
0.1111111044883728,
0.24561403691768646,
0.25,
0.12765957415103912,
0.16393442451953888,
0.1702127605676651,
0.08510638028383255,
0.11764705181121826,
0.10169491171836853,
0.06451612710952759,
0.23999999463558197,
0.1666666567325592,
0.0555555522441864,
0.14999999105930328,
0.145454540848732,
0,
0.09090908616781235,
0.03703703358769417,
0.04878048226237297,
0.138888880610466,
0.290909081697464,
0.33898305892944336,
0.2539682388305664
] | SyeQFiCcF7 | true | [
"A variant of capsule networks that can be used for pairwise learning tasks. Results shows that Siamese Capsule Networks work well in the few shot learning setting."
] |
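The pairwise objective described in the record above (a contrastive loss over l2-normalized capsule pose encodings of an image pair) can be written compactly. The margin value and the squared-Euclidean form below are illustrative assumptions; they are one standard instantiation of a contrastive loss, not necessarily the exact one used by the authors.

```python
# Minimal sketch of a contrastive loss on l2-normalised pose encodings of an
# image pair: same-subject pairs (y = 1) are pulled together, different-subject
# pairs (y = 0) are pushed apart beyond a margin.
import torch
import torch.nn.functional as F

def contrastive_loss(pose_a, pose_b, y, margin=1.0):
    """pose_a, pose_b: (batch, d) capsule pose encodings; y: (batch,) in {0, 1}."""
    a = F.normalize(pose_a, p=2, dim=1)
    b = F.normalize(pose_b, p=2, dim=1)
    d = torch.norm(a - b, dim=1)                       # Euclidean distance
    pos = y * d.pow(2)                                 # pull matching pairs together
    neg = (1 - y) * F.relu(margin - d).pow(2)          # push mismatched pairs apart
    return 0.5 * (pos + neg).mean()
```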
[
"Animals excel at adapting their intentions, attention, and actions to the environment, making them remarkably efficient at interacting with a rich, unpredictable and ever-changing external world, a property that intelligent machines currently lack.",
"Such adaptation property strongly relies on cellular neuromodulation, the biological mechanism that dynamically controls neuron intrinsic properties and response to external stimuli in a context dependent manner.",
"In this paper, we take inspiration from cellular neuromodulation to construct a new deep neural network architecture that is specifically designed to learn adaptive behaviours.",
"The network adaptation capabilities are tested on navigation benchmarks in a meta-learning context and compared with state-of-the-art approaches.",
"Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems.",
"We are now seeing the emergence of highly efficient algorithms that are capable of learning and solving complex problems.",
"However, it remains difficult to learn models that generalise or adapt themselves efficiently to new, unforeseen problems based on past experiences.",
"This calls for the development of novel architectures specifically designed to enhance adaptation capabilities of current deep neural networks (DNN).",
"In biological nervous systems, cellular neuromodulation provides the ability to continuously tune neurons input/output behavior to shape their response to external inputs in different contexts, generally in response to an external signal carried by biochemicals called neuromodulators [2, 9] .",
"Neuromodulation regulates many critical nervous system properties that cannot be achieved solely through synaptic plasticity [7, 8] , which represents the ability for neurons to tune their connectivity during learning.",
"Neuromodulation has been shown to be critical to the adaptive control of continuous behaviours, such as in motor control among others [7, 8] .",
"We propose a new neural architecture specifically designed for DNNs and inspired from cellular neuromodulation which we call NMN, standing for \"Neuro-Modulated Network\".",
"At its core, the NMN architecture is made of two neural networks: a main network and a neuromodulatory network.",
"The main network is a feed-forward DNN composed of neurons equipped with a parametric activation function specifically designed for neuromodulation.",
"It allows the main network to be adapted to new unforeseen problems.",
"The neuromodulatory network, on the other hand, controls the neuronal dynamics of the main network via the parameters of its activation functions.",
"Both networks have different inputs: whereas the main network is in charge of processing samples, the neuromodulatory network processes feedback and contextual data.",
"In [11] , the authors take inspiration from Hebbian plasticity to build networks with plastic weights, allowing them to tune their weights dynamically.",
"In [10] the same authors extand their work by learning a neuromodulatory signal that dictates which and when connections should be plastic.",
"Our architecture is also related to hypernetworks [5] , in which a network's weights are computed through another network.",
"Other recent works focused on learning fixed activation functions [1, 6] .",
"The NMN architecture revolves around the neuromodulatory interaction between the neuromodulatory and main networks.",
"We mimick biological cellular neuromodulation [3] in a DNN by assigning the neuromodulatory network the task to tune the slope and bias of the main network activation functions.",
"Let σ(x) : R → R denote any activation function and its neuromodulatory capable version σ NMN (x, z; w s , w b ) = σ z T (xw s + w b ) where z ∈ R k is a neuromodulatory signal and w s , w b ∈ R k are two parameter vectors of the activation function, respectively governing a scale factor and an offset.",
"In this work, we propose to replace all the main network's neurons activation function with their neuromodulatory capable counterparts.",
"The neuromodulatory signal z, which size k is a free parameter, is shared for all these neurons and computed by the neuromodulatory network as z = f (c).",
"The function f can be any DNN taking as input the vector c representing some contextual inputs (e.g. c may have a dynamic size in which case f would be parameterized as a recurrent neural network (RNN) or a conditional neural process [4] ).",
"The complete NMN architecture and the change made to the activation functions are depicted on Figure 1 .",
"Notably, the number of newly introduced parameters scales linearly with the number of neurons in the main network whereas it would scale linearly with the number of connections between neurons if the neuromodulatory network was affecting connection weights, as seen for instance in the context of hypernetworks [5] .",
"Therefore this approach can be extanded to very large networks.",
"In this work, we use a high level view of a nervous system mechanism called cellular neuromodulation to improve artificial neural networks adaptive capabilities.",
"The results obtained on three meta-RL benchmark problems showed that this new architecture was able to perform better than classical RNN.",
"The work reported in this paper could be extended along several lines.",
"First, it would be interesting to explore other types of machine-learning problems where adaptation is required.",
"Second, research work could also be carried out to further improve the NMN introduced here.",
"For instance, one could introduce new types of parametric activation functions which are not linear, or spiking neurons.",
"It would also be of interest to look at sharing activation function parameters per layer.",
"Furthermore, analysing more in-depth the neuromodulatory signal (and its impact on activation functions) with respect to different more complex tasks could also be worth-while.",
"Finally, let us emphasize that even if the results obtained by our NMN are good and also rather robust with respect to a large choice of parameters, further research is certainly still needed to better characterise their performances."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.05714285373687744,
0.1249999925494194,
0.07692307233810425,
0.1249999925494194,
0,
0,
0.2222222238779068,
0.09756097197532654,
0,
0.06896550953388214,
0.13333332538604736,
0.07999999821186066,
0.07407406717538834,
0,
0,
0.13793103396892548,
0.06666666269302368,
0,
0.07407406717538834,
0,
0.09999999403953552,
0.1249999925494194,
0,
0,
0,
0.08695651590824127,
0,
0.04878048598766327,
0.1111111044883728,
0.25806450843811035,
0,
0.19999998807907104,
0,
0,
0,
0,
0,
0
] | H1xI7XYULr | true | [
"This paper introduces neuromodulation in artificial neural networks."
] |
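The neuromodulation-capable activation defined in the record above, σ_NMN(x, z; w_s, w_b) = σ(z^T (x·w_s + w_b)), maps directly to a small layer. The choice of sigmoid as the base σ and the per-neuron parameterisation below are assumptions for illustration; the paper leaves σ generic.

```python
# Sketch of a neuromodulated activation layer: the neuromodulatory signal z
# (shared by all neurons of the main network) sets each neuron's effective
# slope and bias through learned per-neuron vectors w_s and w_b of size k.
import torch
import torch.nn as nn

class NeuromodulatedActivation(nn.Module):
    def __init__(self, num_neurons, k, base=torch.sigmoid):
        super().__init__()
        self.w_s = nn.Parameter(torch.randn(num_neurons, k) * 0.1)  # scale vectors
        self.w_b = nn.Parameter(torch.zeros(num_neurons, k))        # offset vectors
        self.base = base

    def forward(self, x, z):
        # x: (batch, num_neurons) pre-activations; z: (batch, k) neuromodulatory signal
        scale = torch.einsum("bk,nk->bn", z, self.w_s)   # z^T w_s for each neuron
        bias = torch.einsum("bk,nk->bn", z, self.w_b)    # z^T w_b for each neuron
        return self.base(x * scale + bias)
```

Here z would be produced by the separate neuromodulatory network from feedback and contextual inputs, while x comes from the main feed-forward network.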
[
"Convolution operator is the core of convolutional neural networks (CNNs) and occupies the most computation cost.",
"To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models.",
"Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels.",
"To address this issue, we propose a novel dynamic convolution method named \\textbf{DyNet} in this paper, which can adaptively generate convolution kernels based on image contents.",
"To demonstrate the effectiveness, we apply DyNet on multiple state-of-the-art CNNs.",
"The experiment results show that DyNet can reduce the computation cost remarkably, while maintaining the performance nearly unchanged.",
"Specifically, for ShuffleNetV2 (1.0), MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 40.0%, 56.7%, 68.2% and 72.4% FLOPs respectively while the Top-1 accuracy on ImageNet only changes by +1.0%, -0.27%, -0.6% and -0.08%.",
"Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87x,1.32x and 1.48x on CPU platform respectively.",
"To verify the scalability, we also apply DyNet on segmentation task, the results show that DyNet can reduces 69.3% FLOPs while maintaining the Mean IoU on segmentation task.",
"Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013) , and the neural architectures of CNNs are evolving over the years (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Hu et al., 2018; Zhong et al., 2018a; b) .",
"However, modern high-performance CNNs often require a lot of computation resources to execute large amount of convolution kernel operations.",
"Aside from the accuracy, to make CNNs applicable on mobile devices, building lightweight and efficient deep models has attracting much more attention recently (Howard et al., 2017; Sandler et al., 2018; Ma et al., 2018) .",
"These methods can be roughly categorized into two types: efficient network design and model compression.",
"Representative methods for the former category are MobileNet (Howard et al., 2017; Sandler et al., 2018) and ShuffleNet (Ma et al., 2018; , which use depth-wise separable convolution and channel-level shuffle techniques to reduce computation cost.",
"On the other hand, model compression based methods tend to obtain a smaller network by compressing a larger network via pruning, factorization or mimic (Chen et al., 2015; Han et al., 2015a; Jaderberg et al., 2014; Lebedev et al., 2014; Ba & Caruana, 2014) .",
"Although some handcrafted efficient network structures have been designed, we observe that the significant correlations still exist among convolutional kernels, and introduce large amount of redundant calculations.",
"Moreover, these small networks are hard to compress.",
"For example, Liu et al. (2019) compress MobileNetV2 to 124M, but the accuracy drops by 5.4% on ImageNet.",
"We theoretically analyze above observation, and find that this phenomenon is caused by the nature of static convolution, where correlated kernels are cooperated to extract noise-irrelevant features.",
"Thus it is hard to compress the fixed convolution kernels without information loss.",
"We also find that if we linearly fuse several convolution kernels to generate one dynamic kernel based on the input, we can obtain the noise-irrelevant features without the cooperation of multiple kernels, and further reduce the computation cost of convolution layer remarkably.",
"Based on above observations and analysis, in this paper, we propose a novel dynamic convolution method named DyNet.",
"The overall framework of DyNet is shown in Figure 1 , which consists of a coefficient prediction module and a dynamic generation module.",
"The coefficient prediction module is trainable and designed to predict the coefficients of fixed convolution kernels.",
"Then the dynamic generation module further generates a dynamic kernel based on the predicted coefficients.",
"Our proposed dynamic convolution method is simple to implement, and can be used as a drop-in plugin for any convolution layer to reduce computation cost.",
"We evaluate the proposed DyNet on state-of-the-art networks such as MobileNetV2, ShuffleNetV2 and ResNets.",
"Experiment results show that DyNet reduces 37.0% FLOPs of ShuffleNetV2 (1.0) while further improve the Top-1 accuracy on ImageNet by 1.0%.",
"For MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 54.7%, 67.2% and 71.3% FLOPs respectively, the Top-1 accuracy on ImageNet changes by −0.27%, −0.6% and −0.08%.",
"Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87×,1.32×and 1.48× on CPU platform respectively."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.11428570747375488,
0.04878048226237297,
0.24390242993831635,
0.1428571343421936,
0.1764705777168274,
0.11538460850715637,
0.14999999105930328,
0.1463414579629898,
0.10344827175140381,
0.2857142686843872,
0.12244897335767746,
0,
0.1249999925494194,
0.11320754140615463,
0.09090908616781235,
0.07999999821186066,
0.1666666567325592,
0.1818181723356247,
0.19999998807907104,
0.22641508281230927,
0.2857142686843872,
0.1621621549129486,
0.24242423474788666,
0.19999998807907104,
0.25,
0.12903225421905518,
0.19999998807907104,
0.08888888359069824,
0.14999999105930328
] | SyeZIkrKwS | true | [
"We propose a dynamic convolution method to significantly accelerate inference time of CNNs while maintaining the accuracy."
] |
[
"Operating deep neural networks on devices with limited resources requires the reduction of their memory footprints and computational requirements.",
"In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values.",
"We show that this method is very flexible and that many other techniques can be seen as special cases of LUT-Q.",
"For example, we can constrain the dictionary trained with LUT-Q to generate networks with pruned weight matrices or restrict the dictionary to powers-of-two to avoid the need for multiplications.",
"In order to obtain fully multiplier-less networks, we also introduce a multiplier-less version of batch normalization.",
"Extensive experiments on image recognition and object detection tasks show that LUT-Q consistently achieves better performance than other methods with the same quantization bitwidth.",
"In this paper, we propose a training method for reducing the size and the number of operations of a deep neural network (DNN) that we call look-up table quantization (LUT-Q).",
"As depicted in Fig. 1 , LUT-Q trains a network that represents the weights W ∈ R O×I of one layer by a dictionary d ∈ R K and assignments A ∈ [1, . . . , K] O×I such that Q oi = d Aoi , i.e., elements of Q are restricted to the K dictionary values in d.",
"To learn the assignment matrix A and dictionary d, we iteratively update them after each minibatch.",
"Our LUT-Q algorithm, run for each mini-batch, is summarized in TAB1 LUT-Q has the advantage to be very flexible.",
"By simple modifications of the dictionary d or the assignment matrix A, it can implement many weight compression schemes from the literature.",
"For example, we can constrain the assignment matrix and the dictionary in order to generate a network with pruned weight matrices.",
"Alternatively, we can constrain the dictionary to contain only the values {−1, 1} and obtain a Binary Connect Network BID3 , or to {−1, 0, 1} resulting in a Ternary Weight Network BID12 .",
"Furthermore, with LUT-Q we can also achieve Multiplier-less networks by either choosing a dictionary d whose elements d k are of the form d k ∈ {±2 b k } for all k = 1, . . . , K with b k ∈ Z, or by rounding the output of the k-means algorithm to powers-of-two.",
"In this way we can learn networks whose weights are powers-of-two and can, hence, be implemented without multipliers.The memory used for the parameters is dominated by the weights in affine/convolution layers.",
"Using LUT-Q, instead of storing W, the dictionary d and the assignment matrix A are stored.",
"Hence, for an affine/convolution layer with N parameters, we reduce the memory usage in bits from N B float to just KB float + N ⌈log 2 K⌉, where B float is the number of bits used to store one weight.",
"Furthermore, using LUT-Q we also achieve a reduction in the number of computations: for example, affine layers trained using LUT-Q need to compute just K multiplications at inference time, instead of I multiplications for a standard affine layer with I input nodes.",
"DISPLAYFORM0",
"We have presented look-up table quantization, a novel approach for the reduction of size and computations of deep neural networks.",
"After each minibatch update, the quantization values and assignments are updated by a clustering step.",
"We show that the LUT-Q approach can be efficiently used for pruning weight matrices and training multiplier-less networks as well.",
"We also introduce a new form of batch normalization that avoids the need for multiplications during inference.As argued in this paper, if weights are quantized to very low bitwidth, the activations may dominate the memory footprint of the network during inference.",
"Therefore, we perform our experiments with activations quantized uniformly to 8-bit.",
"We believe that a non-uniform activation quantization, where the quantization values are learned parameters, will help quantize activations to lower precision.",
"This is one of the promising directions for continuing this work.Recently, several papers have shown the benefits of training quantized networks using a distillation strategy BID8 BID14 .",
"Distillation is compatible with our training approach and we are planning to investigate LUT-Q training together with distillation."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
1,
0.1304347813129425,
0.20408162474632263,
0.2926829159259796,
0.11999999731779099,
0.42307692766189575,
0.23188404738903046,
0.2380952388048172,
0.13636362552642822,
0.17391303181648254,
0.30434781312942505,
0.2641509473323822,
0.17910447716712952,
0.178571417927742,
0.19512194395065308,
0.20338982343673706,
0.1666666567325592,
0.2666666507720947,
0.2926829159259796,
0.17391303181648254,
0.19354838132858276,
0.10810810327529907,
0.21276594698429108,
0.23076923191547394,
0.1904761791229248
] | SJfUvruDom | true | [
"In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values"
] |
[
"An unintended consequence of feature sharing is the model fitting to correlated tasks within the dataset, termed negative transfer. ",
"In this paper, we revisit the problem of negative transfer in multitask setting and find that its corrosive effects are applicable to a wide range of linear and non-linear models, including neural networks.",
"We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features.",
"We propose an adversarial training approach to mitigate the effects of negative transfer by viewing the problem in a domain adaptation setting.",
"Finally, empirical results on attribute prediction multi-task on AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner.",
"Advances in machine learning have led to proficient supervised learning models with powerful representations in various prediction tasks.",
"We now expect an ideal classification model to restrict itself to a pertinent set of evidences available to it from the input for prediction.",
"Further, we expect the model to disregard any unrelated evidences in the data to enable better generalization.",
"Figure 1: A supervised classifier 'cheetah vs. snow-leopard' that uses unrelated evidence (of habitat) over relevant evidence (of fur patterns).",
"As shown by the pixel importance maps, the model suffers from the negative transfer prevalent in a typical animal image dataset skewed towards the animal's typical habitat and fails to generalize to rare samples.Let us consider the task of training an animal classifier \"cheetah vs. snow-leopards\" from a dataset of images of these animals, such as those illustrated in Figure 1 -a task which ideally should focus on the animal's appearance features.",
"However, a large portion of these images also contain various cues of the typical habitat of the animals in the background, i.e., tall grass and snow (see Figures 1",
"(a) and",
"(b)) which are, in principle, unrelated to the animal's appearance.",
"An archetypal model is deceived by the co-occurrence of such unrelated, yet easily detectable cues of habitat over the animal's appearance features such as complex fur patterns.",
"However, a proficient supervised learning model must identify relevant evidences for the label of interest and at the same time discard various unrelated evidences such as presence of snow, even though it tends to co-occur frequently with snow-leopard.",
"Consequently, it would be more likely that such a model would perform better on rare-instances (such as those in Figures 1",
"(c) and",
"(d)) and generalize better to unseen instances.This phenomenon of co-occurring but unrelated evidences being present in training data and thereby having a debilitating effect on model performance has been described in literature BID8 ; BID16 ; BID9 ; BID13 ; BID15 ).",
"These techniques utilize the easy availability of labels for unrelated evidences (e.g. background habitat labels above), called negative labels which constitutes an auxiliary task, and seek to mitigate its debilitating performance on the primary task (e.g. animal classification above) with techniques referred to as negative-sharing or negative-transfer.While all of these techniques have tackled this problem utilizing various forms of regularization, we describe several shortcomings of this class of approaches, most notable of which is their inapplicability to the popular paradigm of trainable features obtained via neural representation learning.",
"Motivated by these limitations, in this paper we depart from the direction of regularization-based approaches and examine methods inspired from a domain-adaptation viewpoint to propose an adversarial training-based formulation.",
"We uniquely view such a scenario as an instance of adversarial multi-task learning, where the classification tasks are either the primary task of interest (i.e., predicting the presence of fur pattern and color) or the auxiliary negative tasks (i.e., characteristics of habitat) to be avoided.",
"Since the 2 tasks are unrelated, any label correlation between primary and auxiliary labels in the training data is only by chance and therefore from a domain-adaptation perspective, we envision a target-domain as possibly having a different correlation between the primary and auxiliary labels.",
"The effects of negative transfer are hence mitigated when the classification task is trained in this domain.We discuss advantages of our proposed formulation, inspired from domain-adaptation, to alleviate the negative transfer over existing techniques, including ready applicability to neural networks in an end-to-end fashion.",
"It must be noted that, while the formulation of the problem is motivated with multi-task learning, negative-transfer is a disposition of any supervised learning task from simple binary classification to recent popular supervised tasks such as image detection, captioning, or visual dialog.",
"We present motivating literature that prelude this work next.",
"In this work, we show that adversarial learning is the natural answer to prevent negative transfer.",
"This leads to potential improvement in any supervised learning of natural data that is seeking generalization.",
"We find that even in relatively straight-forward linear models presented above, co-occurrence of unrelated labels hampers performance and must be explicitly treated.",
"We address the problem of negative transfer in a multi-task scenario, and also show the applicability of our solution in any supervised task.",
"Supervised learning practitioners can utilize domain expertise to acquire and leverage additional negative labels for this purpose.",
"Recent work in explainability of machine learning models can also be appropriately leveraged to facilitate this task."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21621620655059814,
0.20408162474632263,
0.2380952388048172,
0.5128204822540283,
0.09756097197532654,
0.11764705181121826,
0.29999998211860657,
0.060606054961681366,
0,
0.18666666746139526,
0.08888888359069824,
0.0714285671710968,
0.0476190410554409,
0.18867924809455872,
0.052631575614213943,
0.1071428507566452,
0.10989010334014893,
0.260869562625885,
0.28070175647735596,
0.07843136787414551,
0.28070175647735596,
0.178571417927742,
0.07407406717538834,
0.29411762952804565,
0.1764705777168274,
0.09999999403953552,
0.2631579041481018,
0.22857142984867096,
0.17142856121063232
] | HJgJS30qtm | true | [
"We look at negative transfer from a domain adaptation point of view to derive an adversarial learning algorithm."
] |
[
"Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks.",
"In this paper, we introduce free-energy-based reinforcement learning (FERL) as an application of quantum hardware.",
"We propose a method for processing a quantum annealer’s measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM).",
"We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer.",
"The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.",
"Reinforcement learning (RL) BID33 ; BID6 has been successfully applied in fields such as engineering BID11 ; BID35 , sociology BID12 ; BID30 , and economics BID22 ; BID31 .",
"The training samples in reinforcement learning are provided by the interaction of an agent with an ambient environment.",
"For example, in a motion planning problem in uncharted territory, it is desirable for the agent to learn to correctly navigate in the fastest way possible, making the fewest blind decisions.",
"That is, neither exploration nor exploitation can be pursued exclusively without either facing a penalty or failing at the task.",
"Our goal is, therefore, not only to design an algorithm that eventually converges to an optimal policy, but for the algorithm to be able to generate suboptimal policies early in the learning process.",
"Free-energy-based reinforcement learning (FERL) using a restricted Boltzmann machine (RBM), as suggested by BID27 , relies on approximating a utility function for the agent, called the Q-function, using the free energy of an RBM.",
"RBMs have the advantage that their free energy can be efficiently calculated using closed formulae.",
"RBMs can represent any joint distribution over binary variables BID20 ; BID15 ; Le BID19 ; however, this property of universality may require exponentially large RBMs BID20 ; Le BID19 .General",
"Boltzmann machines (GBM) are proposed in an effort to devise universal Q-function approximators with polynomially large Boltzmann networks BID10 . Traditionally",
", Monte Carlo simulation is used to perform the computationally expensive tasks of approximating the free energy of GBMs under a Boltzmann distribution. One way to speed",
"up the approximation process is to represent a GBM by an equivalent physical system and try to find its Boltzmann distribution. An example of such",
"a physical system is a quantum annealer consisting of a network of pair-wise interacting quantum bits (qubits). Although quantum annealers",
"have already been used in many areas of computational science, including combinatorial optimization and machine learning, their application in RL has not been explored.In order to use quantum annealing for RL, we first represent the Q-function as the free energy of a physical system, that is, that of a quantum annealer. We then slowly evolve the",
"state of the physical system from a well-known initial state toward a state with a Boltzmann-like probability distribution. Repeating the annealing process",
"sufficiently long can provide us with samples from the Boltzmann distribution so that we can empirically approximate the free energy of the physical system under this distribution. Finally, approximating the free",
"energy of the system would give us an estimate of the Q-function. Up until the past few years, studies",
"were limited to the classical Boltzmann machines. 1 Recently, BID10 generalized the classical",
"method toward a quantum or quantum-inspired algorithm for approximating the free energy of GBMs. Using simulated quantum annealing (SQA) BID10",
"showed that FERL using a deep Boltzmann machine (DBM) can provide a drastic improvement in the early stages of learning, yet performing the same procedure on an actual quantum device remained a difficult task. This is because sampling from a quantum system",
"representing a quantum Boltzmann machine is harder than the classical case, since at the end of each anneal the quantum system is in a superposition. Any attempt to measure the final state of the",
"quantum system is doomed to fail since the superposition would collapse into a classical state that does not carry the entirety of information about the final state.In this work, we have two main contributions. We first employ a quantum annealer as a physical",
"device to approximate the free energy of a classical Boltzmann machine. Second, we generalize the notion of classical Boltzmann",
"machines to quantum Boltzmann machines within the field of RL and utilize a quantum annealer to approximate the free energy of a quantum system. In order to deal with the issue of superposition mentioned",
"above, we propose a novel stacking procedure in that we attempt to reconstruct the full state of superposition from the partial information that we get from sampling after each anneal. Finally we report proof-of-concept results using the D-Wave",
"2000Q quantum processor to provide experimental evidence for the applicability of a quantum annealer in reinforcement learning as predicted by BID10 .",
"We solve the grid-world problem using various Q-learning methods with the Q-function parametrized by different neural networks.",
"For comparison, we demonstrate the performance of a fully connected deep Q-network method that can be considered state of the art.",
"This method efficiently processes every training sample, but, as shown in Fig. 4 , requires a very large number of training samples to converge to the optimal policy.",
"Another conventional method is free-energy-based RL using an RBM.",
"This method is also very successful at learning the optimal policy at the scale of the RL task considered in our experiment.",
"Although this method does not outperform other FERL methods that take advantage of a highly efficient sampling oracle, the processing of each training sample is efficient, as it is based on closed formulae.",
"In fact, for the size of problem considered, the RBM-based FERL outperforms the fully connected deep Q-network method.The comparison of results in Fig. 6 suggests that replica stacking is a successful method for estimating effective classical configurations obtained from a quantum annealer, given that the spins can only be measured in measurement bases.",
"For practical use in RL, this method provides a means of treating the quantum annealer as a QBM.",
"FERL using the quantum annealer, in conjunction with the replica stacking technique, provides significant improvement over FERL using classical Boltzmann machines.",
"The curve representing SQA-based FERL using a Boltzmann machine on the Chimera graph is almost coincident with the one obtained using the D-Wave 2000Q, whereas the SQA-based FERL using a DBM slightly outperforms it.",
"This suggests that quantum annealing chips with greater connectivity and more control over annealing time can further improve the performance of the replica stacking method applied to RL tasks.",
"This is further supported by comparing the performance of SA-based FERL using a DBM versus SA-based FERL using the Chimera graph.",
"This result shows that DBM is, due to its additional connections, a better choice of neural network compared to the Chimera graph.For practical reasons, we aim to associate an identical choice of virtual parameters β and Γ to all of the TFIMs constructed using FERL.",
"BID5 and BID25 provide methods for estimating the effective inverse temperature β for other applications.",
"However, in both studies, the samples obtained from the quantum annealer are matched to the Boltzmann distribution of a classical Ising model.",
"In fact, the transverse-field strength is a second virtual parameter that we consider.",
"The optimal choice Γ \" 0.5 corresponds to 2{3 of the annealing time, in agreement with the work of BID0 , who also considers TFIM with 16 qubits.The agreement of FERL using quantum annealer reads treated as classical Boltzmann samples with that of FERL using SA and classical Boltzmann machines suggests that, at least for this task and this size of Boltzmann machine, the measurements provided by the D-Wave 2000Q can be considered good approximations of Boltzmann distribution samples of classical Ising models.The extended undirected graphical model developed in this paper using the replica stacking method is not limited to Q-function approximation in RL tasks.",
"Potentially, this method can be applied to tasks where Boltzmann machines can be used.",
"This method provides a mechanism for approximating the activations and partition functions of quantum Boltzmann machines that have a significant transverse field.",
"In this paper, we describe a free-energy-based reinforcement learning algorithm using an existing quantum annealer, namely the D-Wave 2000Q.",
"Our method relies on the Suzuki-Trotter decomposition and the use of the measured configurations by the D-Wave 2000Q as replicas of an effective classical Ising model of one dimension higher.",
"The results presented here are first-step proofs of concept of a proposed quantum algorithm with a promising path towards outperforming reinforcement learning algorithms devised for digital hardware.",
"Given appropriate advances in quantum annealing hardware, future research can employ the proposed principles to solve larger-scale reinforcement learning tasks in the emerging field of quantum machine learning."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.1818181723356247,
0.25641024112701416,
0.5,
0.25,
0.09302324801683426,
0.11428570747375488,
0.09090908616781235,
0.10526315122842789,
0.08888888359069824,
0.2083333283662796,
0.060606054961681366,
0,
0.10810810327529907,
0.1904761791229248,
0.1904761791229248,
0.1818181723356247,
0.1818181723356247,
0.05714285373687744,
0.04651162400841713,
0,
0.13793103396892548,
0.1621621549129486,
0.1818181723356247,
0.1818181723356247,
0.178571417927742,
0.1818181723356247,
0.2857142686843872,
0.16326530277729034,
0.31578946113586426,
0.11764705181121826,
0.10810810327529907,
0.13636362552642822,
0.14814814925193787,
0.1621621549129486,
0.08163265138864517,
0.158730149269104,
0.22857142984867096,
0.277777761220932,
0.13636362552642822,
0.2666666507720947,
0.11428570747375488,
0.14035087823867798,
0.0624999962747097,
0.2631579041481018,
0.06451612710952759,
0.2083333283662796,
0.19999998807907104,
0.25641024112701416,
0.2702702581882477,
0.09302324801683426,
0.1860465109348297,
0.1904761791229248
] | HkMhoDITb | true | [
"We train Quantum Boltzmann Machines using a replica stacking method and a quantum annealer to perform a reinforcement learning task."
] |
[
"Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs.",
"However, under the black-box setting, most existing adversaries often have a poor transferability to attack other defense models.",
"In this work, from the perspective of regarding the adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM).",
"NI-FGSM aims to adapt Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples.",
"While SIM is based on our discovery on the scale-invariant property of deep learning models, for which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid \"overfitting” on the white-box model being attacked and generate more transferable adversarial examples.",
"NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models.",
"Empirical results on ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.",
"Deep learning models have been shown to be vulnerable to adversarial examples Szegedy et al., 2014) , which are generated by applying human-imperceptible perturbations on benign input to result in the misclassification.",
"In addition, adversarial examples have an intriguing property of transferability, where adversarial examples crafted by the current model can also fool other unknown models.",
"As adversarial examples can help identify the robustness of models (Arnab et al., 2018) , as well as improve the robustness of models by adversarial training , learning how to generate adversarial examples with high transferability is important and has gained increasing attentions in the literature.",
"Several gradient-based attacks have been proposed to generate adversarial examples, such as onestep attacks and iterative attacks (Kurakin et al., 2016; .",
"Under the white-box setting, with the knowledge of the current model, existing attacks can achieve high success rates.",
"However, they often exhibit low success rates under the black-box setting, especially for models with defense mechanism, such as adversarial training (Madry et al., 2018; and input modification Xie et al., 2018) .",
"Under the black-box setting, most existing attacks fail to generate robust adversarial examples against defense models.",
"In this work, by regarding the adversarial example generation process as an optimization process, we propose two new methods to improve the transferability of adversarial examples: Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM).",
"• Inspired by the fact that Nesterov accelerated gradient (Nesterov, 1983 ) is superior to momentum for conventionally optimization (Sutskever et al., 2013) , we adapt Nesterov accelerated gradient into the iterative gradient-based attack, so as to effectively look ahead and improve the transferability of adversarial examples.",
"We expect that NI-FGSM could replace the momentum iterative gradient-based method in the gradient accumulating portion and yield higher performance.",
"• Besides, we discover that deep learning models have the scale-invariant property, and propose a Scale-Invariant attack Method (SIM) to improve the transferability of adversarial examples by optimizing the adversarial perturbations over the scale copies of the input images.",
"SIM can avoid \"overfitting\" on the white-box model being attacked and generate more transferable adversarial examples against other black-box models.",
"• We found that combining our NI-FGSM and SIM with existing gradient-based attack methods (e.g., diverse input method (Xie et al., 2019) ) can further boost the attack success rates of adversarial examples.",
"Extensive experiments on the ImageNet dataset (Russakovsky et al., 2015) show that our methods attack both normally trained models and adversarially trained models with higher attack success rates than existing baseline attacks.",
"Our best attack method, SI-NI-TI-DIM (Scale-Invariant Nesterov Iterative FGSM integrated with translation-invariant diverse input method), reaches an average success rate of 93.5% against adversarially trained models under the black-box setting.",
"For further demonstration, we evaluate our methods by attacking the latest robust defense methods Xie et al., 2018; Liu et al., 2019; Jia et al., 2019; Cohen et al., 2019) .",
"The results show that our attack methods can generate adversarial examples with higher transferability than state-of-theart gradient-based attacks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09756097197532654,
0.1860465109348297,
0.49180328845977783,
0.2978723347187042,
0.1846153736114502,
0.2978723347187042,
0.17391303181648254,
0.1090909019112587,
0.21276594698429108,
0.2295081913471222,
0.13333332538604736,
0.1463414579629898,
0.1428571343421936,
0.1463414579629898,
0.5,
0.27272728085517883,
0.1818181723356247,
0.41379308700561523,
0.2222222238779068,
0.33898305892944336,
0.145454540848732,
0.178571417927742,
0.0416666604578495,
0.2790697515010834
] | SJlHwkBYDH | true | [
"We proposed a Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and a Scale-Invariant attack Method (SIM) that can boost the transferability of adversarial examples for image classification."
] |
[
"Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks.",
"In this work, we present a probabilistic training method for Neural Network with both binary weights and activations, called PBNet.",
"By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost always, such as $\\textrm{sign}(\\cdot)$, while still obtaining a fully Binary Neural Network at test time.",
"Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution.",
"Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. ",
"We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer.",
"Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks.\n",
"Deep Neural Networks are notorious for having vast memory and computation requirements, both during training and test/prediction time.",
"As such, Deep Neural Networks may be unfeasible in various environments such as battery powered devices, embedded devices (because of memory requirement), on body devices (due to heat dissipation), or environments in which constrains may be imposed by a limited economical budget.",
"Hence, there is a clear need for Neural Networks that can operate in these resource limited environments.One method for reducing the memory and computational requirements for Neural Networks is to reduce the bit-width of the parameters and activations of the Neural Network.",
"This can be achieved either during training (e.g., BID15 ; BID0 ) or using post-training mechanisms (e.g., BID15 , BID5 ).",
"By taking the reduction of the bit-width for weights and activations to the extreme, i.e., a single bit, one obtains a Binary Neural Network.",
"Binary Neural Networks have several advantageous properties, i.e., a 32× reduction in memory requirements and the forward pass can be implemented using XNOR operations and bit-counting, which results in a 58× speedup on CPU BID20 .",
"Moreover, Binary Neural Networks are more robust to adversarial examples BID2 .",
"BID21 introduced a probabilistic training method for Neural Networks with binary weights, but allow for full precision activations.",
"In this paper, we propose a probabilistic training method for Neural Networks with both binary weights and binary activations, which are even more memory and computation efficient.",
"In short, obtain a closed form forward pass for probabilistic neural networks if we constrain the input and weights to binary (random) variables.",
"The output of the Multiply and Accumulate (MAC) operations, or pre-activation, is approximated using a factorized Normal distribution.",
"Subsequently, we introduce stochastic versions of Max-Pooling and Batch Normalization that allow us to propagate the pre-activatoins throughout a single layer.",
"By applying the sign(·) activation function to the random pre-activation, we not only obtain a distribution over binary activations, it also allows for backpropagation through the sign(·) operation.",
"This is especially convenient as this in a deterministic Neural Network all gradient information is zeroed out when using sign as activation.",
"We explore two different methods for training this probabilistic binary neural network: In the first method the activation distribution of layer l is propagated to layer (l + 1), which means the MAC operation is performed on two binary random variables.",
"In the second method the binary activation is sampled as the last operation in a layer using the concrete relaxation BID16 .",
"This can be thought of as a form of local reparametrization BID11 .",
"We call the networks obtained using these methods PBNet and PBNet-S, respectively.At test time, we obtain a single deterministic Binary Neural Network, an ensemble of Binary Neural Networks by sampling from the parameter distribution, or a Ternary Neural Network based on the Binary weight distribution.",
"An advantage of our method is that we can take samples from the parameter distribution indefinitely-without retraining.",
"Hence, this method allows for anytime ensemble predictions and uncertainty estimates.",
"Note that while in this work we only consider the binary case, our method supports any discrete distribution over weights and activations.",
"We have presented a stochastic method for training Binary Neural Networks.",
"The method is evaluated on multiple standardized benchmarks and reached competitive results.",
"The PBNet has various advantageous properties as a result of the training method.",
"The weight distribution allows one to generate ensembles online which results in improved accuracy and better uncertainty estimations.",
"Moreover, the Bayesian formulation of the PBNet allows for further pruning of the network, which we leave as future work.",
"A BINARY DISTRIBUTION For convenience, we have introduced the Binary distribution in this paper.",
"In this appendix we list some of the properties used in the paper, which all follow direcly from the properties of the Bernoulli distribution.",
"The Binary distribution is a reparametrization of the Bernoulli distribution such that: DISPLAYFORM0 This gives the following probability mass function: DISPLAYFORM1 where a ∈ {−1, +1} and θ ∈ [−1, 1].",
"From this, the mean and variance are easily computed: DISPLAYFORM2 Finally, let b ∼ Binary(φ), then ab ∼ Binary(θφ)."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.31578946113586426,
0.6111111044883728,
0.19607841968536377,
0.11764705181121826,
0.1666666567325592,
0.29999998211860657,
0.34285715222358704,
0.3030303120613098,
0.07547169178724289,
0.2916666567325592,
0.05714285373687744,
0.41025641560554504,
0.1599999964237213,
0.14814814925193787,
0.4848484694957733,
0.4878048598766327,
0.25641024112701416,
0.11764705181121826,
0.21621620655059814,
0.1463414579629898,
0.1666666567325592,
0.19607841968536377,
0.1764705777168274,
0.07407406717538834,
0.2181818187236786,
0.060606054961681366,
0.2222222238779068,
0.2631579041481018,
0.5925925970077515,
0.1428571343421936,
0.20689654350280762,
0.05882352590560913,
0.060606054961681366,
0.06666666269302368,
0,
0.1395348757505417,
0.05882352590560913
] | B1fysiAqK7 | true | [
"We introduce a stochastic training method for training Binary Neural Network with both binary weights and activations."
] |
[
"The goal of generative models is to model the underlying data distribution of a\n",
"sample based dataset.",
"Our intuition is that an accurate model should in principle\n",
"also include the sample based dataset as part of its induced probability distribution.\n",
"To investigate this, we look at fully trained generative models using the Generative\n",
"Adversarial Networks (GAN) framework and analyze the resulting generator\n",
"on its ability to memorize the dataset.",
"Further, we show that the size of the initial\n",
"latent space is paramount to allow for an accurate reconstruction of the training\n",
"data.",
"This gives us a link to compression theory, where Autoencoders (AE) are\n",
"used to lower bound the reconstruction capabilities of our generative model.",
"Here,\n",
"we observe similar results to the perception-distortion tradeoff (Blau & Michaeli\n",
"(2018)).",
"Given a small latent space, the AE produces low quality and the GAN\n",
"produces high quality outputs from a perceptual viewpoint.",
"In contrast, the distortion\n",
"error is smaller for the AE.",
"By increasing the dimensionality of the latent\n",
"space the distortion decreases for both models, but the perceptual quality only\n",
"increases for the AE.",
"Generative Adversarial Networks (GANs) were introduced by Goodfellow et al. (2014) for the purpose of generative modelling.",
"Since then this framework has been successfully applied to works in style transfer by Karras et al. (2018) , superresolution by Shocher et al. (2018) and semi-supervised learning by Salimans et al. (2016) , but what GANs actually learn is still poorly understood as has been noted by Webster et al. (2019) .",
"Recently, GANs have been used to solve inverse problems, where it was tried to use the generated manifold to solve an auxiliary task like image completion (Webster et al. (2019) ), MRI reconstruction (Narnhofer et al. (2019) ) or anomaly detection (Shocher et al. (2018) ).",
"For those applications, it is necessary to know if the generator NN actually describes the distribution well.",
"Related works have shown that faithfully reconstructing the images from a generator network is not trivial (Webster et al. (2019) ; Shmelkov et al. (2018) ).",
"The original convergence proof by Goodfellow et al. (2014) assumes that the generator and discriminator Neural Networks (NNs) have infinite capacity and they showed that the discriminator network models the Jensen-Shannon divergence between the probability distribution induced by the generator and the real data distribution.",
"Others have adapted this paradigm and devised loss functions which have been shown to converge to other divergences or distances on the underlying probability distributions ; Nowozin et al. (2016) ; Mao et al. (2017) ).",
"Regularization techniques like Gradient Penalty (Gulrajani et al. (2017) ) and Spectral Norm (Miyato et al. (2018) ) did improve the stability of the training process ) but it is still unclear how well this NNs actually approximate such distances even for trivial problems (Pinetz et al. (2018) ).",
"Additionally, it is not at all clear how the generated distribution or the actual target distribution look like.",
"Arora & Zhang (2017) used the birthday paradox to empirically gauge the size of the support of the generated distribution.",
"GANs are used to transform a well understood low dimensional distribution (in practice either gaussian or uniform) to a high dimensional unknown distribution (Goodfellow et al. (2014) ) by playing a min-max game between two NNs.",
"This paper is based around the intuition, that an estimated probability distribution from a dataset X has high precision if a high percentage of the actual data samples are included in the estimated distribution.",
"To have a sense of what an adequate capacity for a generator network is, we use AE to reconstruct the dataset first.",
"This work relies on the assumption that it is easier to reconstruct the data samples alone, than to reconstruct the entire manifold and Section 5 shows empirical evidence for this.",
"Based on our intuition the manifold consists of the data samples and imposes additional structure between the data samples.",
"In contrast by just reproducing the data samples, no such additional restrictions are given, making the problem strictly simpler.",
"AEs can be trained rapidly and have been researched in detail for a long time (Rumelhart et al. (1985) ).",
"In contrast, trying to do a hyperparameter search on the GAN networks themselves gives rise to all kinds of problems, like instabilities in the training process, random failures and dependence on random seeds for their performance ).",
"Hence, our contributions are as follows:",
"• An investigation of the impact of the dimensionality of the latent space on the generated manifold.",
"We showcase that the fit of the data depends heavily on the latent space.",
"We also show similar results thereby to the perception-distortion tradeoff (Blau & Michaeli (2018) ), where with a small dimension for the latent space, the GAN optimizes for perception and the AE optimizes for distortion.",
"• Relating the GAN problem to a compression task and furthermore using compression tools via deep learning to produce a lower bound for a dataset dependent suitable dimensionality of the latent space.",
"• An investigation of the generated manifold and the limitations thereof to produce shifted or noisy images and how this relates to the size of the latent space and overfitting of the generative model.",
"The remainder of this paper is organized as follows.",
"Section 2 shows the related work.",
"Then in Section 3 we revisit the theory behind pseudo inverting NNs and we explain our methodology in Section 4.",
"In Section 5 the results are shown.",
"Section 6 draws conclusions based on those results.",
"In this work, we show that by reducing the problem to a compression task, we can give a lower bound on the required capacity and latent space dimensionality of the generator network for the distribution estimation task.",
"Relating these two different algorithms to each other, the literature surrounding AEs for invertability and dimensionality reduction, as well as the corresponding theoretical advancements are used.",
"While in this work the encoder and the discriminator NNs use the same architecture, we have not discovered any relation between them.",
"Still, the same architecture works well empirically for both task.",
"Using our framework we show various properties of generator networks.",
"The perceptual image quality appears to be independent of the actual size of the latent space, which is in contrast to AE, where the visual quality improves if the dimensionality of the latent space is increased.",
"However, the ability to reconstruct the training set correctly does depend on the initial latent space.",
"Also the ability of the generator to reconstruct deviations from the original dataset, like a validation set or shifted images depends just as much on the initial latent space.",
"However, the same cannot be said for reconstructing arbitrary noise images.",
"Here the reconstruction ability is independent of the initial latent space unless it is chosen very large, suggesting that the generator has learned realistic natural image features.",
"Here for smaller latent spaces we can still observe face like features.",
"Our hypothesis is that the implicit bias induced by the generator architecture lends itself to generating natural images and GANs are skilled at that by learning primitives which can be combined to construct arbitrary images.",
"In future works we want to use our setup to search towards better and more reliable generators for images."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0,
0,
0.1428571343421936,
0.2222222238779068,
0.17391303181648254,
0.0952380895614624,
0.1818181723356247,
0.29629629850387573,
0,
0.1599999964237213,
0.07999999821186066,
0.1538461446762085,
0,
0.1111111044883728,
0.09999999403953552,
0.29999998211860657,
0.1599999964237213,
0.1111111044883728,
0.19354838132858276,
0.037735845893621445,
0.039215683937072754,
0.06666666269302368,
0.052631575614213943,
0.08510638028383255,
0.04444444179534912,
0.07407406717538834,
0.06666666269302368,
0.13333332538604736,
0.04444444179534912,
0.0952380895614624,
0.11428570747375488,
0.04999999701976776,
0.13793103396892548,
0.1249999925494194,
0.05882352590560913,
0.08510638028383255,
0,
0.38461539149284363,
0.38461539149284363,
0.1395348757505417,
0.19512194395065308,
0.20512820780277252,
0.08695651590824127,
0.09999999403953552,
0.19354838132858276,
0.0952380895614624,
0,
0.21739129722118378,
0.052631575614213943,
0.11764705181121826,
0.0833333283662796,
0.0833333283662796,
0.19999998807907104,
0.2142857164144516,
0.19999998807907104,
0.07999999821186066,
0.21052631735801697,
0.07692307233810425,
0.09090908616781235,
0.0624999962747097
] | Hygy01StvH | true | [
"We analyze the impact of the latent space of fully trained generators by pseudo inverting them."
] |
[
"We address the problem of open-set authorship verification, a classification task that consists of attributing texts of unknown authorship to a given author when the unknown documents in the test set are excluded from the training set.",
"We present an end-to-end model-building process that is universally applicable to a wide variety of corpora with little to no modification or fine-tuning.",
"It relies on transfer learning of a deep language model and uses a generative adversarial network and a number of text augmentation techniques to improve the model's generalization ability.",
"The language model encodes documents of known and unknown authorship into a domain-invariant space, aligning document pairs as input to the classifier, while keeping them separate.",
"The resulting embeddings are used to train to an ensemble of recurrent and quasi-recurrent neural networks.",
"The entire pipeline is bidirectional; forward and backward pass results are averaged.",
"We perform experiments on four traditional authorship verification datasets, a collection of machine learning papers mined from the web, and a large Amazon-Reviews dataset.",
"Experimental results surpass baseline and current state-of-the-art techniques, validating the proposed approach.",
"We investigate the applicability of transfer learning techniques to Authorship Verification (AV) problems, and propose a a method that uses some of the most recent advances in deep learning to achieve state of the art results on a variety of datasets.",
"AV seeks to determine whether two or more text documents have been written by the same author.",
"Some applications of AV include plagiarism analysis, sock-puppet detection, blackmailing, and email spoofing prevention BID7 .",
"Traditionally, studies on AV consider a closed and limited set of authors, and a closed set of documents written by such authors.",
"During the training step, some of these documents (sometimes as long as a novel) are used.",
"The goal can be formulated as to successfully identify whether the authors of a pair of documents are identical BID14 BID19 BID11 .",
"This type of AV tasks assumes access to the writing samples of all possible authors during the training step, which is not realistic.",
"Recently, the AV problem has changed to reflect realistic -and more challenging-scenarios.",
"The goal is no longer to individually learn the writing style of the authors (like in traditional AV methods), but to learn what differentiates two different authors within a corpus.",
"This task involves predicting authorship of documents that may not have been previously encountered within the training set; in fact, the presence of the authors in the training data is not guaranteed either.",
"That is, the test set may contain out of training sample data; given a set of authors of unknown papers contained within the training data, A unknown train , and a set of authors of unknown papers in the test data, A unknown test , it is neither unreasonable nor unexpected to find that A unknown train ∩A unknown test = ∅.",
"Some other challenges arise in modern AV tasks, making authorship verification of a given pair of documents hard to infer.",
"One is the lack of training data, which can manifest itself in any one or more of the following: the training set may be small, samples of available writings may be limited, or the length of the given documents may be insufficient.",
"Another is the test and train documents belonging to different genre and/or topics, both within their respective sets as well as between the train and the test set -implying they were likely drawn from different distributions.",
"The challenge is to ensure robustness in a multitude of possible scenarios.",
"Regardless of the AV problem specifics, generally we assume a training dataset made of sets of triples: DISPLAYFORM0 with x i X known , x j X unknown a realization from random variables X known and X unknown , and the label y i,j Y is drawn from a random variable Y , producing a total of P sets of realizations, each potentially by a different author, thus forming up to P source domains, because it can be argued that a collection of literary works by one author forms a latent domain of it's own.",
"The goal is to learn a prediction function f : X → Y that can generalize well and make accurate predictions regarding documents written by authors both inside and outside of the training set, even if those documents were not seen in training.",
"Less formally, in AV the task is composed of multiple sub-problems: for each given sub-set of texts, we are provided one or more documents that need to be verified and one or more that are known to be of identical authorship.",
"We approach the AV problem by designing a straightforward deep document classification model that relies on transfer learning a deep language model, ensembles, an adversary, differential learning rates, and data augmentation.",
"In order to ensure the design's versatility and robustness, we perform authorship verification on a collection of datasets that have little in common in terms of size, distribution, origins, and manner they were designed.",
"For evaluation, we consider standard AV corpora with minimal amount of training data, PAN-2013 BID12 , PAN-2014E and PAN-2014N BID27 , PAN-2015 BID28 , a collection of scientific papers mined from the web BID2 , and Amazon Reviews dataset BID8 .",
"The proposed approach performs well in all scenarios with no specific modifications and minimal fine-tuning, defeating all baselines, PAN competition winners, as well as the recent Transformation Encoder and PRNN models that were recently shown to perform well on AV tasks.",
"BID8 .",
"Authorship verification has always been a challenging problem.",
"It can be even more difficult when no writing samples of questioned author/authors is given.",
"In this paper, we explore the possibility of a more general approach to the problem, one that does not rely on having most of the authors within the training set.",
"To this end, we use transfer and adversarial learning learning, data augmentation, ensemble methods, and cutting edge developments in training deep models to produce an architecture that is to the best of our knowledge novel at least to problem setting.",
"Our design exhibits a high degree of robustness and stability when dealing with out-of-sample (previously unseen) authors and lack of training data and delivers state-of-the-art performance."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.8333333134651184,
0.15686273574829102,
0.19230768084526062,
0.1463414579629898,
0.10526315122842789,
0.2448979616165161,
0.10526315122842789,
0.27586206793785095,
0.09302324801683426,
0.09756097197532654,
0.1395348757505417,
0.09756097197532654,
0.12765957415103912,
0.12765957415103912,
0.052631575614213943,
0.19230768084526062,
0.1538461446762085,
0.1904761791229248,
0.2222222238779068,
0.1090909019112587,
0.1090909019112587,
0.21052631735801697,
0.1505376249551773,
0.1818181723356247,
0.24137930572032928,
0.14814814925193787,
0.28070175647735596,
0.16393442451953888,
0.16129031777381897,
0.11764705181121826,
0.1463414579629898,
0.1538461446762085,
0.158730149269104,
0.20408162474632263
] | BkgdPnjQ84 | true | [
"We propose and end-to-end model-building process that is universally applicable to a wide variety of authorship verification corpora and outperforms state-of-the-art with little to no modification or fine-tuning."
] |
[
"We consider tackling a single-agent RL problem by distributing it to $n$ learners.",
"These learners, called advisors, endeavour to solve the problem from a different focus.",
"Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system.",
"We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the \\textit{egocentric} planning overestimates values of states where the other advisors disagree, and the \\textit{agnostic} planning is inefficient around danger zones.",
"We introduce a novel approach called \\textit{empathic} and discuss its theoretical aspects.",
"We empirically examine and validate our theoretical findings on a fruit collection task.",
"When a person faces a complex and important problem, his individual problem solving abilities might not suffice.",
"He has to actively seek for advice around him: he might consult his relatives, browse different sources on the internet, and/or hire one or several people that are specialised in some aspects of the problem.",
"He then aggregates the technical, ethical and emotional advice in order to build an informed plan and to hopefully make the best possible decision.",
"A large number of papers tackle the decomposition of a single Reinforcement Learning task (RL, Sutton & Barto, 1998) into several simpler ones.",
"They generally follow a method where agents are trained independently and generally greedily to their local optimality, and are aggregated into a global policy by voting or averaging.",
"Recent works BID12 BID30 prove their ability to solve problems that are intractable otherwise.",
"Section 2 provides a survey of approaches and algorithms in this field.Formalised in Section 3, the Multi-Advisor RL (MAd-RL) partitions a single-agent RL into a MultiAgent RL problem BID22 , under the widespread divide & conquer paradigm.",
"Unlike Hierarchical RL BID2 BID19 BID4 , this approach gives them the role of advisor: providing an aggregator with the local Q-values for all actions.",
"The advisors are said to have a focus: reward function, state space, learning technique, etc.",
"The MAd-RL approach allows therefore to tackle the RL task from different focuses.When a person is consulted for an advice by a enquirer, he may answer egocentrically: as if he was in charge of next actions, agnostically: anticipating any future actions equally, or empathically: by considering the next actions of the enquirer.",
"The same approaches are modelled in the local advisors' planning methods.",
"Section 4 shows that the egocentric planning presents the severe theoretical shortcoming of inverting a max into a max in the global Bellman equation.",
"It leads to an overestimation of the values of states where the advisors disagree, and creates an attractor phenomenon, causing the system to remain static without any tie-breaking possibilities.",
"It is shown on a navigation task that attractors can be avoided by lowering the discount factor γ under a given value.",
"The agnostic planning BID30 has the drawback to be inefficient in dangerous environments, because it gets easily afraid of the controller performing a bad sequence of actions.",
"Finally, we introduce our novel empathic planning and show that it converges to the global optimal Bellman equation when all advisors are training on the full state space.van BID29 demonstrate on a fruit collection task that a distributed architecture significantly speeds up learning and converges to a better solution than non distributed baselines.",
"Section 5.2 extends those results and empirically validates our theoretical analysis: the egocentric planning gets stuck in attractors with high γ values; with low γ values, it gets high scores but is also very unstable as soon as some noise is introduced; the agnostic planning fails at efficiently gathering the fruits near the ghosts; despite lack of convergence guarantees with partial information in advisors' state space, our novel empathic planning also achieves high scores while being robust to noise.",
"This article presented MAd-RL, a common ground for the many recent and successful works decomposing a single-agent RL problem into simpler problems tackled by independent learners.",
"It focuses more specifically on the local planning performed by the advisors.",
"Three of them -two found in the literature and one novel -are discussed, analysed and empirically compared: egocentric, agnostic, and empathic.",
"The lessons to be learnt from the article are the following ones.The egocentric planning has convergence guarantees but overestimates the values of states where the advisors disagree.",
"As a consequence, it suffers from attractors: states where the no-op action is preferred to actions making progress on a subset of subtasks.",
"Some domains, such as resource scheduling, are identified as attractor-free, and some other domains, such as navigation, are set conditions on γ to guarantee the absence of attractor.",
"It is necessary to recall that an attractor-free setting means that the system will continue making progress towards goals as long as there are any opportunity to do so, not that the egocentric MAd-RL system will converge to the optimal solution.The agnostic planning also has convergence guarantees, and the local agnostic planning is equivalent to the global agnostic planning.",
"However, it may converge to bad solutions.",
"For instance, in dangerous environments, it considers all actions equally likely, it favours staying away from situation where a random sequence of actions has a significant chance of ending bad: crossing a bridge would be avoided.",
"Still, the agnostic planning simplicity enables the use of general value functions BID28 BID30 .The",
"empathic planning optimises the system according to the global Bellman optimality equation, but without any guarantee of convergence, if the advisor state space is smaller than the global state.In our experiments, we never encountered a case where the convergence was not obtained, and on the Pac-Boy domain, it robustly learns a near optimal policy after only 10 epochs. It",
"can also be safely applied to Ensemble RL tasks where all learners are given the full state space."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.23076923191547394,
0.06451612710952759,
0.045454543083906174,
0.1599999964237213,
0.1538461446762085,
0.13793103396892548,
0.08510638028383255,
0.05882352590560913,
0.05714285373687744,
0.1621621549129486,
0.07407406717538834,
0.1818181723356247,
0.05405404791235924,
0.1428571343421936,
0.13793103396892548,
0,
0.060606054961681366,
0.05405404791235924,
0.11764705181121826,
0.15789473056793213,
0.10344827175140381,
0.053333330899477005,
0.31578946113586426,
0.0833333283662796,
0,
0.05405404791235924,
0.17142856121063232,
0.0555555522441864,
0.0357142835855484,
0.19999998807907104,
0.09090908616781235,
0,
0.0923076868057251,
0.19354838132858276
] | rkvDssyRb | true | [
"We consider tackling a single-agent RL problem by distributing it to $n$ learners."
] |
[
"We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve the performance of speech enhancement task.",
"We first show that conventional approaches using specific representations such as raw-audio and spectrograms are each effective at targeting different types of noise.\n",
"By integrating both approaches, our model can learn multi-scale and multi-domain features, effectively removing noise existing on different regions on the time-frequency space in a complementary way.",
"Experimental results show that the proposed hybrid model yields better performance and robustness than using each model individually."
] | [
1,
0,
0,
0
] | [
0.21621620655059814,
0.10810810327529907,
0.1538461446762085,
0.19999998807907104
] | B1xOLgWijQ | false | [
"A hybrid model utilizing both raw-audio and spectrogram information for speech enhancement tasks."
] |
[
"Consistently checking the statistical significance of experimental results is the first mandatory step towards reproducible science.",
"This paper presents a hitchhiker's guide to rigorous comparisons of reinforcement learning algorithms.",
"After introducing the concepts of statistical testing, we review the relevant statistical tests and compare them empirically in terms of false positive rate and statistical power as a function of the sample size (number of seeds) and effect size.",
"We further investigate the robustness of these tests to violations of the most common hypotheses (normal distributions, same distributions, equal variances).",
"Beside simulations, we compare empirical distributions obtained by running Soft-Actor Critic and Twin-Delayed Deep Deterministic Policy Gradient on Half-Cheetah.",
"We conclude by providing guidelines and code to perform rigorous comparisons of RL algorithm performances.",
"Reproducibility in Machine Learning and Reinforcement Learning in particular (RL) has become a serious issue in the recent years.",
"As pointed out in Islam et al. BID0 and Henderson et al. BID1 , reproducing the results of an RL paper can turn out to be much more complicated than expected.",
"In a thorough investigation, Henderson et al. BID1 showed it can be caused by differences in codebases, hyperparameters (e.g. size of the network, activation functions) or the number of random seeds used by the original study.",
"Henderson et al. BID1 states the obvious: the claim that an algorithm performs better than another should be supported by evidence, which requires the use of statistical tests.",
"Building on these observations, this paper presents a hitchhiker's guide for statistical comparisons of RL algorithms.",
"The performances of RL algorithm have specific characteristics (they are independent of each other, they are not paired between algorithms etc.).",
"This paper reviews some statistical tests relevant in that context and compares them in terms of false positive rate and statistical power.",
"Beside simulations, it compares empirical distributions obtained by running Soft-Actor Critic (SAC) BID2 and Twin-Delayed DDPG (TD3) BID3 on Half-Cheetah BID4 .",
"We finally provide guidelines to perform robust difference testing in the context of RL.",
"A repository containing the raw results and the code to reproduce all experiments is available at https://github.com/ccolas/rl_stats.",
"No matter the distributions.",
"From the above results, it seems clear that the bootstrap test should never be used for sample sizes below N = 50 and the permutation test should never be used for sample sizes below N = 10.",
"The bootstrap test in particular, uses the sample as an estimate of the true performance distribution.",
"A small sample is a very noisy estimate, which leads to very high false positive rates.",
"The ranked t-test shows a false positive rate of 0 and a statistical power of 0 when N = 2 in all conditions.",
"As noted in BID12 , comparing two samples of size N = 2 can result in only four possible p-values (only 4 possible orders when ranked), none of which falls below α = 0.05.",
"Such quantization issues make this test unreliable for small sample sizes, see BID12 for further comments and references on this issue.When distributions do not meet assumptions.",
"In addition to the behaviors reported above, Section 4.2 shows that non-parametric tests (Mann-Whitney and ranked t-test) can demonstrate very high false positive rates when comparing a symmetric distribution with a skewed one (log-normal).",
"This effect gets worse linearly with the sample size.",
"When the sample size increases, the number of samples drawn in the skewed tail of the log-normal increases.",
"All these realizations will be ranked above any realizations from the other distribution.",
"Therefore, the larger the sample size, the more realization are ranked first in favor of the log-normal, which leads to a bias in the statistical test.",
"This problem does not occur when two log-normal are compared to one another.",
"Comparing a skewed distribution to a symmetric one violates the Mann-Whitney assumptions stating that distributions must have the same shape and spread.",
"The false positive rates of Mann-Whitney and ranked t-test are also above the confidence level whenever a bimodal distribution is compared to another distribution.",
"The traditional recommendation to use non-parametric tests when the distributions are not normal seems to be failing when the two distributions are different.Most robust tests.",
"The t-test and the Welch's t-test were found to be more robust than others to violations of their assumptions.",
"However, α * was found to be slightly above the required level (α * > α) when at least one of the two distributions is skewed (α * ≈ 0.1) no matter the sample size, and when one of the two distributions is bimodal, for small sample sizes N < 10.",
"Welch's α * is always a bit lower than the t-test's α * .Statistical",
"power. Except for",
"the anomalies in small sample size mentioned above due to overconfident tests like the bootstrap or the permutation tests, statistical powers stay qualitatively stable no matter the distributions compared, or the test used: = 0.5: N ≈ 100; = 1: N ≈ 20 and = 2: N ≈ 5, 10.",
"In conclusion, this paper advocates for the use of Welch's t-test with low confidence level (α < 0.05) to ensure a false positive rate below α * < 0.05.",
"The sample size must be selected carefully depending on the expected relative effect size.",
"It also warns against the use of other unreliable tests, such as the bootstrap test (for N < 50), the Mann-Whitney and the ranked t-test (unless assumptions are carefully checked), or the permutation test (for N < 10).",
"Using the t-test or the Welch's t-test with small sample sizes (<5) usually leads to high false positive rate and would require very large relative effect sizes (over = 2) to show good statistical power.",
"Sample sizes above N = 20 generally meet the requirement of a 0.8 statistical power for a relative effect size = 1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.04878048226237297,
0.20512820780277252,
0.1090909019112587,
0.13636362552642822,
0.13333332538604736,
0.24390242993831635,
0.0476190410554409,
0.14814814925193787,
0,
0.07692307233810425,
0.2380952388048172,
0.04347825422883034,
0.2666666507720947,
0.1702127605676651,
0.14999999105930328,
0.09302324801683426,
0.06666666269302368,
0.07999999821186066,
0,
0.04878048226237297,
0.08695651590824127,
0,
0.15686273574829102,
0.09999999403953552,
0.05714285373687744,
0,
0,
0.08510638028383255,
0.10256409645080566,
0.17391303181648254,
0.08163265138864517,
0.1304347813129425,
0.1395348757505417,
0.1230769157409668,
0,
0.06896551698446274,
0.1515151411294937,
0.1090909019112587,
0,
0.0714285671710968,
0.10526315122842789,
0.08510638028383255
] | ryx0N3IaIV | true | [
"This paper compares statistical tests for RL comparisons (false positive, statistical power), checks robustness to assumptions using simulated distributions and empirical distributions (SAC, TD3), provides guidelines for RL students and researchers."
] |
[
"In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation?",
"In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X; Z) − βI(Y; Z) defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/dβ and prediction accuracy are observed with increasing β.",
"We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes.",
"Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models.",
"We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings.",
"Based on the theory, we present an algorithm for discovering phase transition points.",
"Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10.\n",
"The Information Bottleneck (IB) objective (Tishby et al., 2000) :",
"explicitly trades off model compression (I(X; Z), I(·; ·) denoting mutual information) with predictive performance (I(Y ; Z)) using the Lagrange multiplier β, where X, Y are observed random variables, and Z is a learned representation of X. The IB method has proved effective in a variety of scenarios, including improving the robustness against adversarial attacks (Alemi et al., 2016; Fischer, 2018) , learning invariant and disentangled representations (Achille & Soatto, 2018a; b) , underlying information-based geometric clustering (Strouse & Schwab, 2017b) , improving the training and performance in adversarial learning (Peng et al., 2018) , and facilitating skill discovery (Sharma et al., 2019) and learning goal-conditioned policy (Goyal et al., 2019) in reinforcement learning.",
"From Eq.",
"(1) we see that when β → 0 it will encourage I(X; Z) = 0 which leads to a trivial representation Z that is independent of X, while when β → +∞, it reduces to a maximum likelihood objective 1 that does not constrain the information flow.",
"Between these two extremes, how will the IB objective behave?",
"Will prediction and compression performance change smoothly, or do there exist interesting transitions in between?",
"In Wu et al. (2019) , the authors observe and study the learnability transition, i.e. the β value such that the IB objective transitions from a trivial global minimum to learning a nontrivial representation.",
"They also show how this first phase transition relates to the structure of the dataset.",
"However, to answer the full question, we need to consider the full range of β.",
"Motivation.",
"To get a sense of how I(Y ; Z) and I(X; Z) vary with β, we train Variational Information Bottleneck (VIB) models (Alemi et al., 2016) on the CIFAR10 dataset (Krizhevsky & Hinton, 2009) , where each experiment is at a different β and random initialization of the model.",
"Fig.",
"1 shows the I(X; Z), I(Y ; Z) and accuracy vs. β, as well as I(Y ; Z) vs. I(X; Z) for CIFAR10 with 20% label noise (see Appendix I for details).",
"are discontinuous and the accuracy has discrete jumps.",
"The observation lets us refine our question: When do the phase transitions occur, and how do they depend on the structure of the dataset?",
"These questions are important, since answering them will help us gain a better understanding of the IB objective and its close interplay with the dataset and the learned representation.",
"Moreover, the IB objective belongs to a general form of two-term trade-offs in many machine learning objectives: L = Prediction-loss + β · Complexity, where the complexity term generally takes the form of regularization.",
"Usually, learning is set at a specific β.",
"Many more insights can be gained if we understand the behavior of the prediction loss and model complexity with varying β, and how they depend on the dataset.",
"The techniques developed to address the question in the IB setting may also help us understand the two-term tradeoff in other learning objectives.",
"Contributions.",
"In this work, we begin to address the above question in IB settings.",
"Specifically:",
"• We identify a qualitative change of the IB loss landscape w.r.t. p(z|x) for varying β as IB phase transitions (Section 3).",
"• Based on the definition, we introduce a quantity G[p(z|x)] and use it to prove a theorem giving a practical condition for IB phase transitions.",
"We further reveal the connection between G[p(z|x)] and the Fisher information matrix when p(z|x) is parameterized by θ (Section 3).",
"• We reveal the close interplay between the IB objective, the dataset and the learned representation, by showing that in IB, each phase transition corresponds to learning a new nonlinear component of maximum correlation between X and Y , orthogonal to the previously-learned Z, and each with decreasing strength (Section 4).",
"To the best of our knowledge, our work provides the first theoretical formula to address IB phase transitions in the most general setting.",
"In addition, we present an algorithm for iteratively finding the IB phase transition points (Section 5).",
"We show that our theory and algorithm give tight matches with the observed phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent transitions in CIFAR10 experiments (Section 6).",
"In this work, we observe and study the phase transitions in IB as we vary β.",
"We introduce the definition for IB phase transitions, and based on it derive a formula that gives a practical condition for IB phase transitions.",
"We further understand the formula via Jensen's inequality and representational maximum correlation.",
"We reveal the close interplay between the IB objective, the dataset and the learned representation, as each phase transition is learning a nonlinear maximum correlation component in the orthogonal space of the learned representation.",
"We present an algorithm for finding the phase transitions, and show that it gives tight matches with observed phase transitions in categorical datasets, predicts onset of learning new classes and class difficulty in MNIST, and predicts prominent transitions in CIFAR10 experiments.",
"This work is a first theoretical step towards a deeper understanding of the phenomenon of phase transitions in the Information Bottleneck.",
"We believe our approach will be applicable to other \"trade-off\" objectives, like β-VAE (Higgins et al., 2017) and InfoDropout (Achille & Soatto, 2018a) , where the model's ability to predict is balanced against a measure of complexity."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.24657534062862396,
0.3478260934352875,
0.19999998807907104,
0.3333333432674408,
0.11428570747375488,
0.2857142686843872,
0.1818181723356247,
0.12844036519527435,
0.1666666567325592,
0.1249999925494194,
0.2702702581882477,
0.22641508281230927,
0.2222222238779068,
0.1764705777168274,
0.1818181723356247,
0.1304347813129425,
0.13333332538604736,
0.23255813121795654,
0.2083333283662796,
0.23076923191547394,
0.06666666269302368,
0.21276594698429108,
0.2380952388048172,
0.17142856121063232,
0.2666666507720947,
0.2666666507720947,
0.1463414579629898,
0.25,
0.3333333134651184,
0.10526315122842789,
0.37037035822868347,
0.2702702581882477,
0.2857142686843872,
0.23529411852359772,
0.2800000011920929,
0.2857142686843872,
0.44999998807907104,
0.24137930572032928
] | HJloElBYvB | true | [
"We give a theoretical analysis of the Information Bottleneck objective to understand and predict observed phase transitions in the prediction vs. compression tradeoff."
] |
[
"We propose two approaches of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks.",
"The local adaptation of activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) and for every neuron separately (neuron-wise), and then optimizing it using the stochastic gradient descent algorithm.",
"Introduction of neuron-wise activation function acts like a vector activation function as opposed to the traditional scalar activation function given by fixed, global and layer-wise activations.",
"In order to further increase the training speed, an activation slope based slope recovery term is added in the loss function, which further accelerate convergence, thereby reducing the training cost.",
"For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions with and without the slope recovery term and compared with its global counterpart.",
"Moreover, solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods.",
"On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate.",
"Furthermore, the proposed adaptive activation functions with the slope recovery are shown to accelerate the training process in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets with and without data augmentation.",
"In recent years, research on neural networks (NNs) has intensified around the world due to their successful applications in many diverse fields such as speech recognition , computer vision (Krizhevsky et al., 2012) , natural language translation (Wu et al., 2016) , etc.",
"Training of NN is performed on data sets before using it in the actual applications.",
"Various data sets are available for applications like image classification, which is a subset of computer vision.",
"MNIST (LeCun et al., 1998) and their variants like, Fashion-MNIST (Xiao et al., 2017) , and KMNIST (Clanuwat et al., 2018) are the data sets for handwritten digits, images of clothing and accessories, and Japanese letters, respectively.",
"Apart from MNIST, Semeion (Brescia, 1994 ) is a handwritten digit data set that contains 1593 digits collected from 80 persons.",
"SVHN (Netzer et al., 2011) is another data set for street view house numbers obtained from house numbers in Google Street View images.",
"CI-FAR (Krizhevsky et al., 2009 ) is the popular data set containing color images commonly used to train machine learning algorithms.",
"In particular, the CIFAR-10 data set contains 50000 training and 10000 testing images in 10 classes with image resolution of 32x32.",
"CIFAR-100 is similar to the CIFAR-10, except it has 100 classes with 600 images in each class, which is more challenging than the CIFAR-10 data set.",
"problems, where the approximate solutions of governing equations are obtained, as well as inverse problems, where parameters involved in the governing equation are inferred from the training data.",
"Highly efficient and adaptable algorithms are important to design the most effective NN which not only increases the accuracy of the solution but also reduces the training cost.",
"Various architectures of NN like Dropout NN (Srivastava et al., 2014) are proposed in the literature, which can improve the efficiency of the algorithm for specific applications.",
"Activation function plays an important role in the training process of NN.",
"In this work, we are particularly focusing on adaptive activation functions, which adapt automatically such that the network can be trained faster.",
"Various methods are proposed in the literature for adaptive activation function, like the adaptive sigmoidal activation function proposed by (Yu et al., 2002) for multilayer feedforward NNs, while (Qian et al., 2018) focuses on learning activation functions in convolutional NNs by combining basic activation functions in a data-driven way.",
"Multiple activation functions per neuron are proposed (Dushkoff & Ptucha, 2016) , where individual neurons select between a multitude of activation functions.",
"(Li et al., 2013) proposed a tunable activation function, where only a single hidden layer is used and the activation function is tuned.",
"(Shen et al., 2004) , used a similar idea of tunable activation function but with multiple outputs.",
"Recently, Kunc and Kléma proposed a transformative adaptive activation functions for gene expression inference, see (Kunc & Kléma, 2019) .",
"One such adaptive activation function is proposed (Jagtap & Karniadakis, 2019) by introducing scalable hyper-parameter in the activation function, which can be optimized.",
"Mathematically, it changes the slope of activation function thereby increasing the learning process, especially during the initial training period.",
"Due to single scalar hyper-parameter, we call such adaptive activation functions globally adaptive activations, meaning that it gives an optimized slope for the entire network.",
"One can think of doing such optimization at the local level, where the scalable hyper-parameter are introduced hidden layer-wise or even for each neuron in the network.",
"Such local adaptation can further improve the performance of the network.",
"Figure 1 shows a sketch of a neuron-wise locally adaptive activation function based physics-informed neural network (LAAF-PINN), where both the NN part along with the physicsinformed part can be seen.",
"In this architecture, along with the output of NN and the residual term from the governing equation, the activation slopes from every neuron are also contributing to the loss function in the form of slope recovery term.",
"The rest of the paper is organized as follows.",
"Section 2 presents the methodology of the proposed layer-wise and neuron-wise locally adaptive activations in detail.",
"This also includes a discussion on the slope recovery term, expansion of parametric space due to layer-wise and neuron-wise introduction of hyper-parameters, its effect on the overall training cost, and a theoretical result for gradient decent algorithms.",
"Section 3 gives numerical experiments, where we approximate a nonlinear discontinuous function using deep NN by the proposed approaches.",
"We also solve the Burgers equation using the proposed algorithm and present various comparisons in appendix B. Section 4 presents numerical results with various standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets.",
"Finally, in section 5, we summarize the conclusions of our work.",
"In this paper, we present two versions of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions.",
"Such local activation functions further improve the training speed of the neural network compared to its global predecessor.",
"To further accelerate the training process, an activation slope based slope recovery term is added in the loss function for both layer-wise and neuron-wise activation functions, which is shown to enhance the performance of the neural network.",
"Various NN and PINN test cases like nonlinear discontinuous function approximation and Burgers equation respectively, and benchmark deep learning problems like MNIST, CIFAR, SVHN etc are solved to verify our claim.",
"Moreover, we theoretically prove that no sub-optimal critical point or local minimum attracts gradient descent algorithms in the proposed methods (L-LAAF and N-LAAF) with the slope recovery term under only mild assumptions.",
"k=1 is a limit point of (Θ m ) m∈N and a sub-optimal critical point or a sub-optimal local minimum.",
"and h",
"Following the proofs in (Bertsekas, 1997, Propositions 1.2.1-1.2.4), we have that ∇J(Θ) = 0 and J(Θ) < Jc(0) + S(0), for all three cases of the conditions corresponding the different rules of the learning rate.",
"Therefore, we have that for all k ∈ {1, . . . , D − 1},",
"Furthermore, we have that for all k ∈ {1, . . . , D − 1} and all j ∈ {1, . . . , N k },",
"By combining equation 5-equation 7, for all k ∈ {1, . . . , D − 1},",
"which implies that for all a k = 0 since (D − 1)",
"exp(a k ) = 0.",
"This implies that J(Θ) = Jc(0) + S(0), which contradicts with J(Θ) < Jc(0) + S(0).",
"This proves the desired statement for L-LAAF.",
"For N-LAAF, we prove the statement by contradiction.",
"Suppose that the parameter vectorΘ consisting of {w",
"k=1 ∀j = 1, 2, · · · , N k is a limit point of (Θ m ) m∈N and a suboptimal critical point or a sub-optimal local minimum.",
"Redefine",
"and h",
"for all j ∈ {1, . . . , N k }, where w k,j ∈ R 1×N k−1 and b k,j ∈ R. Then, by the same proof steps, we have that ∇J(Θ) = 0 and J(Θ) < Jc(0) + S(0), for all three cases of the conditions corresponding the different rules of the learning rate.",
"Therefore, we have that for all k ∈ {1, . . . , D − 1} and all j ∈ {1, . . . , N k },",
"By combining equation 6-equation 8, for all k ∈ {1, . . . , D − 1} and all j ∈ {1, . . . , N k }, ,",
"which implies that for all a",
"This implies that J(Θ) = Jc(0) + S(0), which contradicts with J(Θ) < Jc(0) + S(0).",
"This proves the desired statement for N-LAAF."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5,
0.17777776718139648,
0.1111111044883728,
0.10256409645080566,
0.3181818127632141,
0,
0.08888888359069824,
0.25531914830207825,
0.11320754140615463,
0.06896550953388214,
0.06451612710952759,
0.09090908616781235,
0,
0.1111111044883728,
0,
0.11428570747375488,
0.052631575614213943,
0.05714285373687744,
0.05128204822540283,
0.10526315122842789,
0.07692307233810425,
0.1666666567325592,
0.19607843458652496,
0.11764705181121826,
0.11428570747375488,
0.06451612710952759,
0.3030303120613098,
0.1666666567325592,
0.06451612710952759,
0.21052631735801697,
0.10256409645080566,
0,
0.24390242993831635,
0.1395348757505417,
0,
0.27586206793785095,
0.08695651590824127,
0.060606054961681366,
0.12244897335767746,
0.07999999821186066,
0.3333333134651184,
0.19354838132858276,
0.2222222238779068,
0.0952380895614624,
0.08888888359069824,
0.06666666269302368,
0.1249999925494194,
0.0714285671710968,
0.1249999925494194,
0.06896550953388214,
0.07407406717538834,
0,
0,
0.0952380895614624,
0,
0,
0.05128204822540283,
0.06896551698446274,
0.1249999925494194,
0.12121211737394333,
0.09999999403953552,
0,
0.0952380895614624
] | rkeJzpNtPS | true | [
"Proposing locally adaptive activation functions in deep and physics-informed neural networks for faster convergence"
] |
[
"Learning in Gaussian Process models occurs through the adaptation of hyperparameters of the mean and the covariance function.",
"The classical approach entails maximizing the marginal likelihood yielding fixed point estimates (an approach called Type II maximum likelihood or ML-II).",
" An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs we call Fully Bayesian Gaussian Process Regression (GPR)",
". This work considers two approximations to the intractable hyperparameter posterior",
", 1) Hamiltonian Monte Carlo (HMC) yielding a sampling based approximation and",
"2) Variational Inference (VI) where the posterior over hyperparameters is approximated by a factorized Gaussian (mean-field) or a full rank Gaussian accounting for correlations between hyperparameters.",
"We analyse the predictive performance for fully Bayesian GPR on a range of benchmark data sets.",
"We demonstrate the feasibility of fully Bayesian GPR in the Gaussian likelihood setting for moderate sized high-dimensional data sets with composite kernels.",
"We present a concise comparative analysis across different approximation schemes and find that VI schemes based on the Gaussian variational family are only marginally inferior in terms of predictive performance to the gold standard HMC.",
"While sampling with HMC can be tuned to generate samples from multi-modal posteriors using tempered transitions (Neal, 1996) , the predictions can remain invariant to samples from different hyperparameter modes.",
"Fully Bayesian bottom: Airline).",
"In the CO 2 data where we undertake long-range extrapolation, the uncertainty intervals under the full Bayesian schemes capture the true observations while ML-II underestimates predictive uncertainty.",
"For the Airline dataset, red in each twoway plot denotes ML-II, the uncertainty intervals under the full Bayesian schemes capture the upward trend better than ML-II.",
"The latter also misses on structure that the other schemes capture.",
"inference in GPs is highly intractable and one has to consider the trade-off between computational cost, accuracy and robustness of uncertainty intervals.",
"Most interesting real-world applications of GPs entail hand-crafted kernels involving many hyperparameters where there risk of overfitting is not only higher but also hard to detect.",
"A more robust solution is to integrate over the hyperparameters and compute predictive intervals that reflect these uncertainties.",
"An interesting question is whether conducting inference over hierarchies in GPs increases expressivity and representational power by accounting for a more diverse range of models consistent with the data.",
"More specifically, how does it compare to the expressivity of deep GPs (Damianou and Lawrence, 2013) with point estimate hyperparameters.",
"Further, these general approximation schemes can be considered in conjunction with different incarnations of GP models where transformations are used to warp the observation space yielding warped GPs (Snelson et al., 2004) or warp the input space either using parametric transformations like neural nets yielding deep kernel learning (Wilson et al., 2016) or non-parametric ones yielding deep GPs (Damianou and Lawrence, 2013 6.",
"Appendix"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0,
0.3529411852359772,
0,
0,
0.1249999925494194,
0.1599999964237213,
0.2666666507720947,
0.1428571343421936,
0,
0.1538461446762085,
0.0624999962747097,
0.1249999925494194,
0,
0.13333332538604736,
0.05882352590560913,
0,
0.10526315122842789,
0.06896550953388214,
0.06451612710952759
] | S1ldYJh4FH | true | [
"Analysis of Bayesian Hyperparameter Inference in Gaussian Process Regression "
] |
[
"We consider the problem of using variational latent-variable models for data compression.",
"For such models to produce a compressed binary sequence, which is the universal data representation in a digital world, the latent representation needs to be subjected to entropy coding.",
"Range coding as an entropy coding technique is optimal, but it can fail catastrophically if the computation of the prior differs even slightly between the sending and the receiving side.",
"Unfortunately, this is a common scenario when floating point math is used and the sender and receiver operate on different hardware or software platforms, as numerical round-off is often platform dependent.",
"We propose using integer networks as a universal solution to this problem, and demonstrate that they enable reliable cross-platform encoding and decoding of images using variational models.",
"The task of information transmission in today's world is largely divided into two separate endeavors: source coding, or the representation of data (such as audio or images) as sequences of bits, and channel coding, representing sequences of bits as analog signals on imperfect, physical channels such as radio waves BID7 .",
"This decoupling has substantial benefits, as the binary representations of arbitrary data can be seamlessly transmitted over arbitrary physical channels by only changing the underlying channel code, rather than having to design a new code for every possible combination of data source and physical channel.",
"Hence, the universal representation of any compressed data today is the binary channel, a representation which consists of a variable number of binary symbols, each with probability 1 2 , and no noise (i.e. uncertainty).",
"As a latent representation, the binary channel unfortunately is a severe restriction compared to the richness of latent representations defined by many variational latent-variable models in the literature (e.g., BID13 BID22 BID18 , and in particular models targeted at data compression BID23 BID0 .",
"Variational latent-variable models such as VAEs BID13 consist of an encoder model distribution e(y |",
"x) bringing the data x into a latent representation y, and a decoder model distribution d(x |",
"y), which represents the data likelihood conditioned on the latents.",
"Given an encoder e, we observe the marginal distribution of latents m(y",
") = E x [e(y | x",
")], where the expectation runs over the (unknown) data distribution.",
"The prior p(y",
") is a variational estimate of the marginal BID1 .By",
"choosing the parametric forms of these distributions and the training objective appropriately, many such models succeed in representing relevant information in the data they are trained for quite compactly (i.e., with a small expected Kullback-Leibler (KL) divergence between the encoder and the prior, E x D KL [e p]), and so may be called compressive in a sense. However",
", not all of them can be directly used for practical data compression, as the representation needs to be further converted into binary (entropy encoded). This conversion",
"is typically performed by range coding, or arithmetic coding BID20 . Range coding is",
"asymptotically optimal: the length of the binary sequence quickly converges to the expected KL divergence in bits, for reasonably large sequences (such as, for one image). For this to hold",
", the following requirements must be satisfied: Figure 1 : The same image, decoded with a model computing the prior using integer arithmetic (left), and the same model using floating point arithmetic (right). The image was decoded",
"correctly, beginning in the top-left corner, until floating point round-off error caused a small discrepancy between the sender's and the receiver's copy of the prior, at which point the error propagated catastrophically.• The representation must",
"be discrete-valued, i.e. have a finite number of states, and be noiseless -i.e. the conditional entropy of the encoder must be zero: DISPLAYFORM0 • All scalar elements of the representation y must be brought into a total ordering, and the prior needs to be written using the chain rule of calculus (as a product of conditionals), as the algorithm can only encode or decode one scalar random variable at a time.• Both sides of the binary",
"channel (i.e. sender and receiver) must be able to evaluate the prior, and they must have identical instances of it.The latter point is crucial, as range coding is extremely sensitive to differences in p between sender and receiver -so sensitive, in fact, that even small perturbations due to floating point round-off error can lead to catastrophic error propagation. Unfortunately, numerical round-off",
"is highly platform dependent, and in typical data compression applications, sender and receiver may well employ different hardware or software platforms. Round-off error may even be non-deterministic",
"on one and the same computer. Figure 1 illustrates a decoding failure in a",
"model which computes p using floating point math, caused by such computational non-determinism in sender vs. receiver. Recently, latent-variable models have been explored",
"that employ artificial neural networks (ANNs) to compute hierarchical or autoregressive priors BID22 BID18 , including some of the best-performing learned image compression models BID17 BID14 . Because ANNs are typically based on floating point",
"math, these methods are vulnerable to catastrophic failures when deployed on heterogeneous platforms.To address this problem, and enable use of powerful learned variational models for real-world data compression, we propose to use integer arithmetic in these ANNs, as floating-point arithmetic cannot presently be made deterministic across arbitrary platforms. We formulate a type of quantized neural network we",
"call integer networks, which are specifically targeted at generative and compression models, and at preventing computational non-determinism in computation of the prior. Because full determinism is a feature of many existing",
", widely used image and video compression methods, we also consider using integer networks end to end for computing the representation itself.",
"There is a large body of recent research considering quantization of ANNs mostly targeted at image recognition applications.",
"BID6 train classification networks on lower precision multiplication.",
"BID11 and BID19 perform quantization down to bilevel (i.e., 1-bit integers) at inference time to reduce computation in classification networks.",
"More recently, BID24 and others have used quantization during training as well as inference, to reduce computation on gradients as well as activations, and BID5 use non-uniform quantization to remove floating point computation, replacing it completely with integer offsets into an integer lookup table.While the quantization of neural networks is not a new topic, the results from the above techniques focus almost exclusively on classification networks.",
"BID8 , BID9 , and others have demonstrated that these types of networks are particularly robust to capacity reduction.Models used for image compression, like many generative models, are much more sensitive to capacity constraints since they tend to underfit.",
"As illustrated in and in figure 3 (right), this class of models is much more sensitive to reductions of capacity, both in terms of network size and the expressive power of the activation function.",
"This may explain why our experiments with post-hoc quantization of network activations have never yielded competitive results for this class of model (not shown).As",
"illustrated in figure 1 and table 1, small floating point inconsistencies in variational latent-variable models can have disastrous effects when we use range coding to employ the models for data compression across different hardware or software platforms. The",
"reader may wonder whether there exists other entropy coding algorithms that can convert discrete latent-variable representations into a binary representation, and which do not suffer from a sensitivity to perturbations in the probability model. Unfortunately",
", such an algorithm would always produce suboptimal results for the following reason. The source coding",
"theorem BID21 ) establishes a lower bound on the average length of the resulting bit sequences, which range coding achieves asymptotically (i.e. for long bit sequences). The lower bound is",
"given by the cross entropy between the marginal and the prior: DISPLAYFORM0 where |b(y)| is the length of the binary representation of y. If an entropy coding",
"algorithm tolerates error in the values of p(y | θ), this means it must operate under the assumption of identical probability values for a range of values of θ -in other words, discretize the probability values. Since the cross entropy",
"is minimal only for p(y | θ) = m(y) (for all y), this would impose a new lower bound on |b(y)| given by the cross entropy with the discretized probabilities, which is greater or equal to the cross entropy given above. Thus, the more tolerant",
"the entropy coding method is to errors in p, the further it deviates from optimal performance. Moreover, it is hard to",
"establish tolerance intervals for probability values computed with floating point arithmetic, in particular when ANNs are used, due to error propagation. Hence, it is generally",
"difficult to provide guarantees that a given tolerance will not be exceeded. For similar reasons, current",
"commercial compression methods model probabilities exclusively in the discrete domain (e.g., using lookup tables; BID16 .Our approach to neural network",
"quantization is the first we are aware of which specifically addresses non-deterministic computation, as opposed to computational complexity. It enables a variety of possible",
"variational model architectures and distributions to be effectively used for platformindependent data compression. While we aren't assessing its effects",
"on computational complexity here, it is conceivable that complexity reductions can also be achieved with the same approach; this is a topic for future work."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.48275861144065857,
0.09756097197532654,
0,
0,
0.2857142686843872,
0.03448275476694107,
0.1071428507566452,
0.08510638028383255,
0.145454540848732,
0.0624999962747097,
0.060606054961681366,
0.07692307233810425,
0,
0,
0.07692307233810425,
0,
0.07407406717538834,
0.11594202369451523,
0.1860465109348297,
0,
0.04651162400841713,
0.08695651590824127,
0,
0.02739725634455681,
0,
0.0952380895614624,
0,
0.14999999105930328,
0.11764705181121826,
0.1764705777168274,
0.13333332538604736,
0.21052631735801697,
0,
0.1599999964237213,
0.052631575614213943,
0.05633802339434624,
0.07692307233810425,
0.045454539358615875,
0.1463414579629898,
0.18867924809455872,
0,
0.060606054961681366,
0.045454539358615875,
0,
0.04255318641662598,
0.072727270424366,
0,
0.0952380895614624,
0,
0.10256409645080566,
0.09999999403953552,
0.2222222238779068,
0.14999999105930328
] | S1zz2i0cY7 | true | [
"We train variational models with quantized networks for computational determinism. This enables using them for cross-platform data compression."
] |
[
"Neural networks powered with external memory simulate computer behaviors.",
"These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks.",
"In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures.",
"The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine.",
"A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.",
"Recurrent Neural Networks (RNNs) are Turing-complete (Siegelmann & Sontag, 1995) .",
"However, in practice RNNs struggle to learn simple procedures as they lack explicit memory (Graves et al., 2014; Mozer & Das, 1993) .",
"These findings have sparked a new research direction called Memory Augmented Neural Networks (MANNs) that emulate modern computer behavior by detaching memorization from computation via memory and controller network, respectively.",
"MANNs have demonstrated significant improvements over memory-less RNNs in various sequential learning tasks Le et al., 2018a; Sukhbaatar et al., 2015) .",
"Nonetheless, MANNs have barely simulated general-purpose computers.",
"Current MANNs miss a key concept in computer design: stored-program memory.",
"The concept has emerged from the idea of Universal Turing Machine (UTM) (Turing, 1936) and further developed in Harvard Architecture (Broesch, 2009 ), Von Neumann Architecture (von Neumann, 1993 .",
"In UTM, both data and programs that manipulate the data are stored in memory.",
"A control unit then reads the programs from the memory and executes them with the data.",
"This mechanism allows flexibility to perform universal computations.",
"Unfortunately, current MANNs such as Neural Turing Machine (NTM) (Graves et al., 2014) , Differentiable Neural Computer (DNC) and Least Recently Used Access (LRUA) (Santoro et al., 2016) only support memory for data and embed a single program into the controller network, which goes against the stored-program memory principle.",
"Our goal is to advance a step further towards UTM by coupling a MANN with an external program memory.",
"The program memory co-exists with the data memory in the MANN, providing more flexibility, reuseability and modularity in learning complicated tasks.",
"The program memory stores the weights of the MANN's controller network, which are retrieved quickly via a key-value attention mechanism across timesteps yet updated slowly via backpropagation.",
"By introducing a meta network to moderate the operations of the program memory, our model, henceforth referred to as Neural Stored-program Memory (NSM), can learn to switch the programs/weights in the controller network appropriately, adapting to different functionalities aligning with different parts of a sequential task, or different tasks in continual and few-shot learning.",
"To validate our proposal, the NTM armed with NSM, namely Neural Universal Turing Machine (NUTM), is tested on a variety of synthetic tasks including algorithmic tasks from Graves et al. (2014) , composition of algorithmic tasks and continual procedure learning.",
"For these algorithmic problems, we demonstrate clear improvements of NUTM over NTM.",
"Further, we investigate NUTM in few-shot learning by using LRUA as the MANN and achieve notably better results.",
"Finally, we expand NUTM application to linguistic problems by equipping NUTM with DNC core and achieve competitive performances against stateof-the-arts in the bAbI task .",
"Taken together, our study advances neural network simulation of Turing Machines to neural architecture for Universal Turing Machines.",
"This develops a new class of MANNs that can store and query both the weights and data of their own controllers, thereby following the stored-program principle.",
"A set of five diverse experiments demonstrate the computational universality of the approach.",
"This paper introduces the Neural Stored-program Memory (NSM), a new type of external memory for neural networks.",
"The memory, which takes inspirations from the stored-program memory in computer architecture, gives memory-augmented neural networks (MANNs) flexibility to change their control programs through time while maintaining differentiability.",
"The mechanism simulates modern computer behavior, potential making MANNs truly neural computers.",
"Our experiments demonstrated that when coupled with our model, the Neural Turing Machine learns algorithms better and adapts faster to new tasks at both sequence and sample levels.",
"When used in few-shot learning, our method helps MANN as well.",
"We also applied the NSM to the Differentiable Neural Computer and observed a significant improvement, reaching the state-of-the-arts in the bAbI task.",
"Although this paper limits to MANN integration, other neural networks can also reap benefits from our proposed model, which will be explored in future works.",
"Table 9 : Task settings (continual procedure learning tasks)."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.07407406717538834,
0,
0.20000000298023224,
0.1111111044883728,
0,
0,
0,
0,
0,
0,
0.22857142984867096,
0,
0.0952380895614624,
0,
0.07999999821186066,
0,
0,
0.0624999962747097,
0.040816325694322586,
0.1860465109348297,
0.10526315122842789,
0,
0,
0.45454543828964233,
0.06666666269302368,
0.2222222238779068,
0.1666666567325592,
0.05714285373687744,
0.10526315122842789,
0.11764705926179886,
0,
0,
0.0624999962747097,
0
] | rkxxA24FDr | true | [
"A neural simulation of Universal Turing Machine"
] |
[
"It is common practice to decay the learning rate.",
"Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training.",
"This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam.",
"It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times.",
"We can further reduce the number of parameter updates by increasing the learning rate $\\epsilon$ and scaling the batch size $B \\propto \\epsilon$.",
"Finally, one can increase the momentum coefficient $m$ and scale $B \\propto 1/(1-m)$, although this tends to slightly reduce the test accuracy.",
"Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning.",
"We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes.",
"Stochastic gradient descent (SGD) remains the dominant optimization algorithm of deep learning.",
"However while SGD finds minima that generalize well BID30 BID26 , each parameter update only takes a small step towards the objective.",
"Increasing interest has focused on large batch training BID8 BID10 BID27 , in an attempt to increase the step size and reduce the number of parameter updates required to train a model.",
"Large batches can be parallelized across many machines, reducing training time.",
"Unfortunately, when we increase the batch size the test set accuracy often falls BID12 BID8 .To",
"understand this surprising observation, BID23 argued one should interpret SGD as integrating a stochastic differential equation. They",
"showed that the scale of random fluctuations in the SGD dynamics, g = ( N B − 1), where is the learning rate, N training set size and B batch size. Furthermore",
", they found that there is an optimum fluctuation scale g which maximizes the test set accuracy (at constant learning rate), and this introduces an optimal batch size proportional to the learning rate when B N . BID8 already",
"observed this scaling rule empirically and exploited it to train ResNet-50 to 76.3% ImageNet validation accuracy in one hour. Here we show,•",
"When one decays the learning rate, one simultaneously decays the scale of random fluctuations g in the SGD dynamics. Decaying the learning",
"rate is simulated annealing. We propose an alternative",
"procedure; instead of decaying the learning rate, we increase the batch size during training. This strategy achieves near-identical",
"model performance on the test set with the same number of training epochs but significantly fewer parameter updates. Our proposal does not require any fine-tuning",
"as we follow pre-existing training schedules; when the learning rate drops by a factor of α, we instead increase the batch size by α.• As shown previously, we can further reduce the",
"number of parameter updates by increasing the learning rate and scaling B ∝ . One can also increase the momentum coefficient and",
"scale B ∝ 1/(1 − m), although this slightly reduces the test accuracy. We train InceptionResNet-V2 on ImageNet in under 2500",
"parameter updates, using batches of 65536 images, and reach a validation set accuracy of 77%. We also replicate the setup of BID8 on TPU and train",
"ResNet-50 on ImageNet to 76.1% accuracy in under 30 minutes.We note that a number of recent works have discussed increasing the batch size during training BID7 BID3 BID1 BID2 BID5 , but to our knowledge no paper has shown empirically that increasing the batch size and decaying the learning rate are quantitatively equivalent. A key contribution of our work is to demonstrate that",
"decaying learning rate schedules can be directly converted into increasing batch size schedules, and vice versa; providing a straightforward pathway towards large batch training.In section 2 we discuss the convergence criteria for SGD in strongly convex minima, in section 3 we interpret decaying learning rates as simulated annealing, and in section 4 we discuss the difficulties of training with large momentum coefficients. Finally in section 5 we present conclusive experimental",
"evidence that the empirical benefits of decaying learning rates in deep learning can be obtained by instead increasing the batch size during training. We exploit this observation and other tricks to achieve",
"efficient large batch training on CIFAR-10 and ImageNet.",
"We can often achieve the benefits of decaying the learning rate by instead increasing the batch size during training.",
"We support this claim with experiments on CIFAR-10 and ImageNet, and with a range of optimizers including SGD, Momentum and Adam.",
"Our findings enable the efficient use of vast batch sizes, significantly reducing the number of parameter updates required to train a model.",
"This has the potential to dramatically reduce model training times.",
"We further increase the batch size B by increasing the learning rate and momentum parameter m, while scaling B ∝ /(1 − m).",
"Combining these strategies, we train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, using batches of 65536 images.",
"We also exploit increasing batch sizes to train ResNet-50 to 76.1% ImageNet validation set accuracy on TPU in under 30 minutes.",
"Most strikingly, we achieve this without any hyper-parameter tuning, since our scaling rules enable us to directly convert existing hyper-parameter choices from the literature for large batch training."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.4571428596973419,
0.07407406717538834,
0.2222222238779068,
0.42424240708351135,
0.12121211737394333,
0.13793103396892548,
0,
0.1666666567325592,
0.05882352590560913,
0.2380952388048172,
0.08695651590824127,
0.2222222238779068,
0,
0.307692289352417,
0.25531914830207825,
0.05882352590560913,
0.2142857164144516,
0.09999999403953552,
0.41379308700561523,
0.1111111044883728,
0.29999998211860657,
0.32258063554763794,
0.060606054961681366,
0.11428570747375488,
0.3333333432674408,
0.24242423474788666,
0.380952388048172,
0.29999998211860657,
0.5517241358757019,
0.06666666269302368,
0.1249999925494194,
0.1818181723356247,
0.42424240708351135,
0,
0.12121211737394333,
0.1538461446762085
] | B1Yy1BxCZ | true | [
"Decaying the learning rate and increasing the batch size during training are equivalent."
] |
[
"Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate.",
"We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective.",
"In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks.",
"Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies.",
"On the Stanford Question Answering Dataset, our model achieves state of the art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.",
"Existing state-of-the-art question answering models are trained to produce exact answer spans for a question and a document.",
"In this setting, a ground truth answer used to supervise the model is defined as a start and an end position within the document.",
"Existing training approaches optimize using cross entropy loss over the two positions.",
"However, this suffers from a fundamental disconnect between the optimization, which is tied to the position of a particular ground truth answer span, and the evaluation, which is based on the textual content of the answer.",
"This disconnect is especially harmful in cases where answers that are textually similar to, but distinct in positions from, the ground truth are penalized in the same fashion as answers that are textually dissimilar.",
"For example, suppose we are given the sentence \"Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history\", the question \"which team is considered to be one of the greatest teams in NBA history\", and a ground truth answer of \"the Golden State Warriors team of 2017\".",
"The span \"Warriors\" is also a correct answer, but from the perspective of traditional cross entropy based training it is no better than the span \"history\".To",
"address this problem, we propose a mixed objective that combines traditional cross entropy loss over positions with a measure of word overlap trained with reinforcement learning. We",
"obtain the latter objective using self-critical policy learning in which the reward is based on word overlap between the proposed answer and the ground truth answer. Our",
"mixed objective brings two benefits: (i)",
"the reinforcement learning objective encourages answers that are textually similar to the ground truth answer and discourages those that are not; (ii",
") the cross entropy objective significantly facilitates policy learning by encouraging trajectories that are known to be correct. The",
"resulting objective is one that is both faithful to the evaluation metric and converges quickly in practice.In addition to our mixed training objective, we extend the Dynamic Coattention Network (DCN) by with a deep residual coattention encoder. This",
"allows the network to build richer representations of the input by enabling each input sequence to attend to previous attention contexts. BID26",
"show that the stacking of attention layers helps model long-range DISPLAYFORM0 Figure 1: Deep residual coattention encoder.dependencies. We merge",
"coattention outputs from each layer by means of residual connections to reduce the length of signal paths. BID6 show",
"that skip layer connections facilitate signal propagation and alleviate gradient degradation.The combination of the deep residual coattention encoder and the mixed objective leads to higher performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset (SQuAD) BID20 compared to our DCN baseline. The improvement",
"is especially apparent on long questions, which require the model to capture long-range dependencies between the document and the question. Our model, which",
"we call DCN+, achieves state-of-the-art results on SQuAD, with 75.1% exact match accuracy and 83.1% F1. When ensembled,",
"the DCN+ obtains 78.9% exact match accuracy and 86.0% F1.",
"We introduced DCN+, an state-of-the-art question answering model with deep residual coattention trained using a mixed objective that combines cross entropy supervision with self-critical policy learning.",
"We showed that our proposals improve model performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset ( SQuAD).",
"On SQuAD, the DCN+ achieves 75.1% exact match accuracy and 83.1% F1.",
"When ensembled, the DCN+ obtains 78.9% exact match accuracy and 86.0% F1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.12244897335767746,
0.15686273574829102,
0.2666666507720947,
0.13333332538604736,
0.38461539149284363,
0.052631575614213943,
0.09090908616781235,
0.05882352590560913,
0.20408162474632263,
0.0416666604578495,
0.10344827175140381,
0.08695651590824127,
0.12765957415103912,
0.17777776718139648,
0,
0.09756097197532654,
0.04878048226237297,
0.20689654350280762,
0.09999999403953552,
0.2380952388048172,
0.19999998807907104,
0.3692307770252228,
0.1904761791229248,
0.19512194395065308,
0.1764705777168274,
0.21276594698429108,
0.4000000059604645,
0.22857142984867096,
0.1666666567325592
] | H1meywxRW | true | [
"We introduce the DCN+ with deep residual coattention and mixed-objective RL, which achieves state of the art performance on the Stanford Question Answering Dataset."
] |
[
"Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years.\n",
"It could be time to take a step back and analyze the good and bad aspects in the field of NAS.",
"A variety of algorithms search architectures under different search space.",
"These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization.",
"This raises a comparability problem when comparing the performance of various NAS algorithms.",
"NAS-Bench-101 has shown success to alleviate this problem.",
"In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.",
"NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.",
"The design of our search space is inspired by the one used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph.",
"Each edge here is associated with an operation selected from a predefined operation set.",
"For it to be applicable for all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 neural cell candidates in total.",
"The training log using the same setup and the performance for each architecture candidate are provided for three datasets.",
"This allows researchers to avoid unnecessary repetitive training for selected architecture and focus solely on the search algorithm itself.",
"The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and presents a more computational cost friendly NAS community for a broader range of researchers.",
"We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspirations to new designs of NAS algorithms.",
"In further support of the proposed NAS-Bench-102, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, which verify its applicability.",
"The deep learning community is undergoing a transition from hand-designed neural architecture (He et al., 2016; Krizhevsky et al., 2012; to automatically designed neural architecture (Zoph & Le, 2017; Pham et al., 2018; Dong & Yang, 2019b; Liu et al., 2019) .",
"In its early era, the great success of deep learning was promoted by novel neural architectures, such as ResNet (He et al., 2016) , Inception , VGGNet (Simonyan & Zisserman, 2015) , and Transformer (Vaswani et al., 2017) .",
"However, manually designing one architecture requires human experts to try numerous different operation and connection choices (Zoph & Le, 2017) .",
"In contrast to architectures that are manually designed, those automatically found by neural architecture search (NAS) algorithms require much less human interaction and expert effort.",
"These NAS-generated architectures have shown promising results in many domains, such as image recognition (Zoph & Le, 2017; Pham et al., 2018; , sequence modeling (Pham et al., 2018; Dong & Yang, 2019b; Liu et al., 2019) , etc.",
"Recently, a variety of NAS algorithms have been increasingly proposed.",
"While these NAS methods are methodically designed and show promising improvements, many setups in their algorithms are different.",
"(1) Different search space is utilized, e.g., different macro skeletons of the whole architecture Tan et al., 2019 ) and a different operation set for the micro cell within the skeleton (Pham et al., 2018) , etc. (2) After a good architecture is selected, various strategies can be employed to train this architecture and report the performance, e.g., different data augmentation (Ghiasi et al., 2018; , different regularization , different scheduler , and different selections of hyper-parameters (Liu et al., 2018; Dong & Yang, 2019a) .",
"(3) The validation set for testing the performance of the selected architecture is not split in the same way (Liu et al., 2019; Pham et al., 2018) .",
"These discrepancies raise a comparability problem when comparing the performance of various NAS algorithms, making it difficult to conclude their contributions.",
"In response to this problem, NAS-Bench-101 (Ying et al., 2019) and NAS-HPO-Bench are proposed.",
"However, some NAS algorithms can not be applied directly on NASBench-101, and NAS-HPO-Bench only has 144 candidate architectures, which maybe insufficient to evaluate NAS algorithms.",
"To extend these two benchmarks and towards better reproducibility of NAS methods 1 , we propose NAS-Bench-201 with a fixed cell search space, inspired from the search space used in the most popular neural cell-based searching algorithms Liu et al., 2019) .",
"As shown in Figure 1 , each architecture consists of a predefined skeleton with a stack of the searched cell.",
"In this way, architecture search is transformed into the problem of searching a good cell.",
"Each cell is represented as a densely-connected directed acyclic graph (DAG) as shown in the bottom section of Figure 1 .",
"Here the node represents the sum of the feature maps and each edge is associated with an operation transforming the feature maps from the source node to the target node.",
"The size of the search space is related to the number of nodes defined for the DAG and the size of the operation set.",
"In NAS-Bench-201, we choose 4 nodes and 5 representative operation candidates for the operation set, which generates a total search space of 15,625 cells/architectures.",
"Each architecture is trained multiple times on three different datasets.",
"The training log and performance of each architecture are provided for each run.",
"The training accuracy/test accuracy/training loss/test loss after every training epoch for each architecture plus the number of parameters and floating point operations (FLOPs) are accessible.",
"Hopefully, NAS-Bench-201 will show its value in the field of NAS research.",
"(1) It provides a unified benchmark for most up-to-date NAS algorithms including all cell-based NAS methods.",
"With NASBench-201, researchers can focus on designing robust searching algorithm while avoiding tedious hyper-parameter tuning of the searched architecture.",
"Thus, NAS-Bench-201 provides a relatively fair benchmark for the comparison of different NAS algorithms.",
"(2) It provides the full training log of each architecture.",
"Unnecessary repetitive training procedure of each selected architecture can be avoided (Liu et al., 2018; Zoph & Le, 2017) so that researchers can target on the essence of NAS, i.e., search algorithm.",
"Another benefit is that the validation time for NAS largely decreases when testing in NAS-Bench-201, which provides a computational power friendly environment for more participations in NAS.",
"(3) It provides results of each architecture on multiple datasets.",
"The model transferability can be thoroughly evaluated for most NAS algorithms.",
"(4) In NAS-Bench-201, we provide systematic analysis of the proposed search space.",
"We also evaluate 10 recent advanced NAS algorithms including reinforcement learning (RL)-based methods, evolutionary strategy (ES)-based methods, differentiable-based methods, etc.",
"We hope our empirical analysis can bring some insights to the future designs of NAS algorithms.",
"In this paper, we introduce NAS-Bench-201 that extends the scope of reproducible NAS.",
"In NASBench-201, almost any NAS algorithms can be directly evaluated.",
"We train and evaluate 15,625 architecture on three different datasets, and we provide results regarding different metrics.",
"We comprehensively analyze our dataset and test some recent NAS algorithms on NAS-Bench-201 to serve as baselines for future works.",
"In future, we will (1) consider HPO and NAS together and (2) much larger search space.",
"We welcome researchers to try their NAS algorithms on our NAS-Bench-201 and would update the paper to include their results.",
"Table 6 : We compare the correlation of different training strategies.",
"The correlation coefficient between the validation accuracy after several training epochs on CIFAR-10 and (1) the validation accuracy of full trained models on the CIFAR-10 training set, (2) the test accuracy on CIFAR-10 trained with the training and validation sets, (3) the validation/test accuracy on CIFAR-100 trained with the CIFAR-100 training set, (4) the validation/test accuracy on ImageNet-16-120 trained with the ImageNet-16-120 training set.",
"We use the validation accuracy after \"EPOCHS\" training epochs, where the the cosine annealing converged after \"TOTAL\" epochs.",
"Parameter sharing (Pham et al., 2018 ) becomes a common technique to improve the efficiency of differentiable neural architecture search methods (Liu et al., 2019; Dong & Yang, 2019b; a) .",
"The shared parameters are shared over millions of architecture candidates.",
"It is almost impossible for the shared parameters to be optimal for all candidates.",
"We hope to evaluate the trained shared parameters quantitatively.",
"Specially, we use DARTS, GDAS, and SETN to optimize the shared parameters and the architecture encoding on CIFAR-10.",
"For each architecture candidate, we can calculate its probability of being a good architecture from the architecture encoding following SETN (Dong & Yang, 2019a) .",
"In addition, we can also evaluate a candidate using the shared parameters on the validation set to obtain \"the one-shot validation accuracy\".",
"It is computationally expensive to evaluate all candidates on the whole validation set.",
"To accelerate this procedure, we evaluate each architecture on a mini-batch with the size of 2048, and use the accuracy on this mini-batch to approximate \"the one-shot validation accuracy\".",
"Ideally, the architecture ranking sorted by the probability or the one-shot validation accuracy should be similar to the ground truth ranking.",
"We show the correlation between the proxy metric and the ground truth validation accuracy in Table 7 .",
"There are several observations: (1) The correlation between the probability (encoded by the architecture encoding) and the ground truth accuracy is low.",
"It suggests that the argmax-based deriving strategy (Liu et al., 2019) can not secure a good architecture.",
"It remains open on how to derive a good architecture after optimizing the shared parameters.",
"(2) The behavior of BN layers is important to one-shot validation accuracy.",
"The accumulated mean and variance from the training set are harmful to one-shot accuracy.",
"Instead, each architecture candidate should re-calculate the mean and variance of the BN layers.",
"(3) GDAS introduced Gumbel-softmax sampling when optimizing the architecture encoding.",
"This strategy leads to a high correlation for the learned probability than that of DARTS.",
"(4) The uniform sampling strategy for training the shared parameters (Dong & Yang, 2019a) can increase the correlation for one-shot accuracy compared to the strategy of the joint optimizing strategy (Dong & Yang, 2019b; Liu et al., 2019) ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.14814814925193787,
0.23529411852359772,
0,
0.1904761791229248,
0.1249999925494194,
0.06451612710952759,
0.4166666567325592,
0,
0,
0.1428571343421936,
0,
0.07407406717538834,
0.11428570747375488,
0.20689654350280762,
0.0624999962747097,
0.04878048598766327,
0,
0.0714285671710968,
0.12121211737394333,
0,
0.2222222238779068,
0.1599999964237213,
0.029411762952804565,
0,
0.13793103396892548,
0.08695651590824127,
0.19354838132858276,
0.08510638028383255,
0,
0,
0,
0.06896550953388214,
0.07999999821186066,
0,
0,
0,
0,
0.09999999403953552,
0.260869562625885,
0,
0.27272728085517883,
0,
0,
0.0624999962747097,
0,
0.21052631735801697,
0,
0.1538461446762085,
0.25,
0.0952380895614624,
0.4444444477558136,
0,
0.2142857164144516,
0.08695651590824127,
0.23076923191547394,
0,
0,
0,
0.054054051637649536,
0,
0.1904761791229248,
0.11764705181121826,
0.0833333283662796,
0,
0.0714285671710968,
0.0952380895614624,
0.060606058686971664,
0.07999999821186066,
0,
0,
0,
0.08695651590824127,
0.09999999403953552,
0.09090908616781235,
0,
0,
0.08695651590824127,
0.052631575614213943
] | HJxyZkBKDr | true | [
"A NAS benchmark applicable to almost any NAS algorithms."
] |
[
"Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions.",
"However, generalization properties of GANs have not been well understood.",
"In this paper, we analyze the generalization of GANs in practical settings.",
"We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator.",
"We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator.",
"The penalty guarantees the generalization and convergence of GANs.",
"Experiments on synthetic and large scale datasets verify our theoretical analysis.\n",
"GANs BID6 are one of the most popular tools for modeling high dimensional data.",
"The original GAN is, however, highly unstable and often suffers from mode collapse.",
"Much of recent researches has focused on improving the stability of GANs BID21 BID8 BID14 BID10 .",
"On the theoretical aspect, BID17 proved that gradient based training of the original GAN is locally stable.",
"BID8 further proved that GANs trained with Two Timescale Update Rule (TTUR) converge to local equilibria.",
"However, the generalization of GANs at local equilibria is not discussed in depth in these papers.",
"BID2 showed that the generator can win by remembering a polynomial number of training examples.",
"The result implies that a low capacity discriminator cannot detect the lack of diversity.",
"Therefore, it cannot teach the generator to approximate the target distribution.",
"In section 4, we discuss the generalization capability of high capacity discriminators.",
"We show that high capacity discriminators trained with the original GAN loss tends to overfit to the mislabeled samples in training dataset, guiding the generator toward collapsed equilibria (i.e. equilibria where the generator has mode collapse).",
"BID3 proposed to measure the generalization capability of GAN by estimating the number of modes in the model distribution using the birthday paradox.",
"Experiments on several datasets showed that the number of modes in the model distribution is several times greater than the number of training examples.",
"The author concluded that although GANs might not be able to learn distributions, they do exhibit some level of generalization.",
"Our analysis shows that poor generalization comes from the mismatch between discriminators trained on discrete finite datasets and the theoretically optimal discriminator.",
"We propose a zero-centered gradient penalty for improving the generalization capability of (high capacity) discriminators.",
"Our zero-centered gradient penalty pushes the discriminator toward the optimal one, making GAN to converge to equilibrium with good generalization capability.Our contributions are as follow:1.",
"We show that discriminators trained with the original GAN loss have poor generalization capability.",
"Poor generalization in the discriminator prevents the generator from learning the target distribution.",
"TAB0 compares the key properties of our 0-GP with one centered GP (1-GP) BID7 and zero centered GP on real/fake samples only (0-GP-sample) BID13 .",
"In this paper, we clarify the reason behind the poor generalization capability of GAN.",
"We show that the original GAN loss does not guide the discriminator and the generator toward a generalizable equilibrium.",
"We propose a zero-centered gradient penalty which pushes empirical discriminators toward the optimal discriminator with good generalization capability.",
"Our gradient penalty provides better generalization and convergence guarantee than other gradient penalties.",
"Experiments on diverse datasets verify that our method significantly improves the generalization and stability of GANs.Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, and Xiaodong He.",
"On the discriminationgeneralization tradeoff in GANs.",
"In International Conference on Learning Representations, 2018.A PROOF FOR PROPOSITION 1For continuous random variable V , P(V = v) = 0 for any v. The probability of finding a noise vector z such that G(z) is exactly equal to a real datapoint x ∈ D r via random sampling is 0.",
"Therefore, the probability of a real datapoint x i being in the fake dataset D g is 0.",
"Similarly, the probability of any fake datapoint being in the real dataset is 0.",
"DISPLAYFORM0 Furthermore, due to the curse of dimensionality, the probability of sampling a datapoint which is close to another datapoint in high dimensional space also decrease exponentially.",
"The distances between datapoints are larger in higher dimensional space.",
"That suggests that it is easier to separate D r and D (t) g in higher dimensional space."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.260869562625885,
0.23999999463558197,
0.1621621549129486,
0.6666666865348816,
0.45454543828964233,
0.07999999821186066,
0.2222222238779068,
0.07692307233810425,
0.2857142686843872,
0.13793103396892548,
0.06896550953388214,
0.2142857164144516,
0.1428571343421936,
0.14814814925193787,
0,
0.1599999964237213,
0.045454543083906174,
0.1249999925494194,
0.0624999962747097,
0.1818181723356247,
0.11764705181121826,
0.7142857313156128,
0.2222222238779068,
0.14814814925193787,
0.0833333283662796,
0.11428570747375488,
0.1538461446762085,
0.19999998807907104,
0.4516128897666931,
0.3199999928474426,
0.25641024112701416,
0.10526315122842789,
0.09999999403953552,
0.13333332538604736,
0.07692307233810425,
0.1111111044883728,
0,
0.06666666269302368
] | ByxPYjC5KQ | true | [
"We propose a zero-centered gradient penalty for improving generalization and stability of GANs"
] |
[
"Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.",
"While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of ``tricks\".",
"The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures.",
"In this work we take a sober view of the current state of GANs from a practical perspective.",
"We reproduce the current state of the art and go beyond fairly exploring the GAN landscape.",
"We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.",
"Deep generative models are a powerful class of unsupervised machine learning models.",
"The power of these models was recently harnessed in a variety of applications, including image generation, learned compression, and domain transfer BID13 Radford et al., 2016; BID0 BID0 .",
"Generative adversarial networks BID8 are one of the main approaches to learning such models in a fully unsupervised fashion.",
"The GAN framework can be viewed as a two-player game where the first player, the generator, is learning to transform some simple input distribution (usually a standard multivariate Normal or uniform) to a distribution on the space of images, such that the second player, the discriminator, cannot tell whether the samples belong to the true distribution or were synthesized.",
"Both players aim to minimize their own loss and the solution to the game is the Nash equilibrium where neither player can improve their loss unilaterally.",
"This powerful framework can also be derived by minimizing a divergence between the model distribution and the true distribution BID20 .Training",
"GANs involves solving a minimax problem over the parameters of the generator and the discriminator which are usually parameterized as deep convolutional neural networks. Consequently",
", this minimax problem is notoriously hard to solve in practice. As a result",
", a plethora of loss functions, regularization and normalization schemes, coupled with neural architecture choices, have been proposed BID8 Salimans et al., 2016; BID19 BID9 BID18 .Our contributions",
". In this work we",
"provide a thorough empirical analysis of these competing approaches, and help the researchers and practitioners navigate this space. We first define",
"the GAN landscape -the set of loss functions, normalization and regularization schemes, and the most commonly used architectures. We explore this",
"search space on several modern large-scale data sets by means of hyperparameter optimization, considering both \"good\" sets of hyperparameters reported in the literature, as well as ones obtained by Gaussian Process regression. By analyzing the",
"impact of the loss function, we conclude that the non-saturating loss is sufficiently stable across data sets, architectures and hyperparameters. We then proceed",
"to decompose the effect of various normalization and regularization schemes, as well as varying architectures. We show that both",
"gradient penalty BID9 as well as spectral normalization BID19 are useful in the context of high-capacity architectures. Finally, we discuss",
"some common pitfalls, reproducibility issues, and practical considerations. We provide reference",
"implementations, including training and evaluation code on Github 1 and provide pre-trained models on TensorFlow Hub. 2 2 THE GAN LANDSCAPE",
"Given that there are 4 major components (loss, architecture, regularization, normalization) to analyze for each data set, it is infeasible to explore the whole landscape.",
"Hence, we opt for a more pragmatic solution -we keep some dimensions fixed, and vary the others.",
"For each experiment we highlight three aspects: (1) FID distribution of the top 5% of the trained models, (2) the corresponding sample diversity score, and (3) the tradeoff between the computational budget (i.e. number of models to train) and model quality in terms of FID.",
"Each model was retrained 5 times with a different random seed and we report the median score.",
"The variance for models obtained by Gaussian Process regression is handled implicitly so we train each model once.",
"In this work we study the GAN landscape: losses, regularization and normalization schemes, and neural architectures, and their impact on the on the quality of generated samples which we assess by recently introduced quantitative metrics.",
"Our fair and thorough empirical evaluation suggests that one should consider non-saturating GAN loss and spectral normalization as default choices when applying GANs to a new data set.",
"Given additional computational budget, we suggest adding the gradient penalty from BID9 and train the model until convergence.",
"Furthermore, additional marginal gains can be obtained by combining normalization and regularization empirically confirming the importance of the Lipschitz constant of the discriminator.",
"Furthermore, both types of architectures proposed up-to this point perform reasonably well.",
"A separate ablation study uncovered that most of the tricks applied in the ResNet style architectures lead to marginal changes in the quality and should be avoided due to the high computational cost.",
"As a result of this large-scale study we identify the common pitfalls standing in the way of accurate and fair comparison and propose concrete actions to demystify the future results -issues with metrics, data set preprocessing, non-determinism, and missing implementation details are particularly striking.",
"We hope that this work, together with the open-sourced reference implementations and trained models, will serve as a solid baseline for future GAN research.",
"We present an empirical study with SNDCGAN and ResNet CIFAR architectures on CIFAR10 in figure 5 and figure 6 .",
"In addition to Section 3.1, we evaluate one more kind of loss on CIFAR10.",
"Here HG, NS and WGAN stand for hinge loss, non saturating loss and Wasserstein loss respectively.",
"We observe that hinge loss performs very similar to non-saturating loss.",
"DISPLAYFORM0"
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11764705181121826,
0.09999999403953552,
0.24390242993831635,
0.7586206793785095,
0.29629629850387573,
0.06666666269302368,
0.1666666567325592,
0.09999999403953552,
0.1875,
0.13793103396892548,
0.05882352590560913,
0.1249999925494194,
0.2222222238779068,
0.07407406717538834,
0.09756097197532654,
0,
0.1818181723356247,
0.1249999925494194,
0.1395348757505417,
0.11764705181121826,
0.12903225421905518,
0.1249999925494194,
0.0833333283662796,
0.06451612710952759,
0.05405404791235924,
0.13333332538604736,
0.07999999821186066,
0.13333332538604736,
0,
0.1428571343421936,
0.09999999403953552,
0.13333332538604736,
0.12121211737394333,
0.07999999821186066,
0.1463414579629898,
0.11538460850715637,
0.10810810327529907,
0.06666666269302368,
0.1428571343421936,
0,
0
] | rkGG6s0qKQ | true | [
"A sober view on the current state of GANs from a practical perspective"
] |
[
"As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents.",
"Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action.",
"Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent.",
"Our approach generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency.",
"The first captures the impact of perturbation on the relative expected reward of the action to be explained. ",
"The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. ",
"We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). ",
"We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.",
"and deep sequential models (Karpathy et al., 2015) .",
"However, interpretability for RL-based agents has received significantly less attention.",
"Interpreting the strategies learned by RL agents can help users better understand the problem that the agent is trained to solve.",
"For instance, interpreting the actions of a chess-playing agent in a position could provide useful information about aspects of the position.",
"Interpretation of RL agents is also an important step before deploying such models to solve real-world problems.",
"Inspired by the popularity and use of saliency maps to interpret in computer vision, a number of existing approaches have proposed similar methods for reinforcement learning-based agents.",
"derive saliency maps that explain RL agent behavior by applying a Gaussian blur to different parts of the input image.",
"They generate saliency maps using differences in the value function and policy vector between the original and perturbed state.",
"They achieve promising results on agents trained to play Atari games.",
"Iyer et al. (2018) compute saliency maps using a difference in the action-value (Q(s, a)) between the original and perturbed state.",
"There are two primary limitations to these approaches.",
"The first is that they highlight features whose perturbation affects actions apart from the one we are explaining.",
"This is illustrated in Figure 1 , which shows a chess position (it is white's turn).",
"Stockfish 1 plays the move Bb6 in this position, which traps the black rook (a5) and queen (c7) 2 .",
"The knight protects the white bishop on a4, and hence the move works.",
"In this position, if we consider the saliency of the white queen (square d1), then it is apparent that the queen is not involved in the tactic and hence the saliency should be low.",
"However, perturbing the state (by removing the queen) leads to a state with substantially different values for Q(s, a) and V (s).",
"Therefore, existing approaches Iyer et al., 2018) mark the queen as salient.",
"The second limitation is that they highlight features that are not relevant to the action to be explained.",
"In Figure 1c , perturbing the state by removing the black pawn on c6 alters the expected reward for actions other than the one to be explained.",
"Therefore, it alters the policy vector and is marked salient.",
"However, the pawn is not relevant to explain the move played in the position (Bb6).",
"In this work, we propose a perturbation based approach for generating saliency maps for black-box agents that builds on two desired properties of action-focused saliency.",
"The first, specificity, captures the impact of perturbation only on the Q-value of the action to be explained.",
"In the above example, this term downweights features such as the white queen that impact the expected reward of all actions equally.",
"The second, relevance, downweights irrelevant features that alter the expected rewards of actions other than the action to be explained.",
"It removes features such as the black pawn on c6 that increase the expected reward of other actions (in this case, Bb4).",
"By combining these aspects, we generate a saliency map that highlights features of the input state that are relevant for the action to be explained.",
"Figure 1 illustrates how the saliency map generated by our approach only highlights pieces relevant to the move, unlike existing approaches.",
"We use our approach to explain the actions taken by agents for board games (Chess and Go), and for Atari games (Breakout, Pong and Space Invaders).",
"Using a number of illustrative examples, we show that our proposed approach obtains more focused and accurate interpretations for all of these setups when compared to and Iyer et al. (2018) .",
"We also demonstrate that our approach is more effective in identifying important pieces in chess puzzles, and further, in aiding skilled chess players to solve chess puzzles (improves accuracy of solving them by nearly 25% and reduces the time taken by 31% over existing approaches).",
"We presented a perturbation-based approach that generates more focused saliency maps than existing approaches by balancing two aspects (specificity and relevance) that capture different desired characteristics of saliency.",
"We showed through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.",
"The results of our technique show that saliency can provide meaningful insights into a black-box RL agent's behavior.",
"For experiments on Go, we use the pre-trained MiniGo RL agent: https://github.com/ tensorflow/minigo.",
"This agent was trained using the AlphaGo Algorithm .",
"It also adds features and architecture changes from the AlphaZero Algorithm Silver et al. (2017) .",
"For experiments on Atari agents and for generating saliency maps for , we use their code and pre-trained RL agents available at https://github.com/greydanus/ visualize_atari.",
"These agents are trained using the Asynchronous Advantage Actor-Critic Algorithm (A3C) (Mnih et al., 2016) .",
"For generating saliency maps using Iyer et al. (2018) , we use our implementation.",
"All of our code and more detailed results are available in our Github repository: https://github.com/ rl-interpretation/understandingRL .",
"For chess and Go, we perturb the board position by removing one piece at a time.",
"We do not remove a piece if the resulting position is illegal.",
"For instance, in chess, we do not remove the king.",
"For Atari, we use the perturbation technique described in .",
"The technique perturbs the input image by adding a Gaussian blur localized around a pixel.",
"The blur is constructed using the Hadamard product to interpolate between the original input image and a Gaussian blur.",
"The saliency maps for Atari agents have been computed on the frames provided by in their code repository.",
"The puzzles for conducting the Chess human studies, creating the Chess Saliency Dataset, and providing illustrative examples have been taken from Lichess: https://database.lichess.",
"org/.",
"The puzzles for illustrative examples on Go have been taken from OnlineGo: https: //online-go.com/puzzles.",
"Figure 8 shows the saliency maps generated by our approach for the top 3 moves in a chess position.",
"The maps highlight the different pieces that are salient for each move.",
"For instance, Figure 8a shows that for the move Qd4, the pawn on g7 is important.",
"This is because the queen move protects the pawn.",
"For the saliency maps in Figures 8b and 8c , the pawn on g7 is not highlighted."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.4166666567325592,
0.25531914830207825,
0.17391303181648254,
0.1395348757505417,
0.1702127605676651,
0.3199999928474426,
0.10526315122842789,
0.1111111044883728,
0,
0.21739129722118378,
0.13636362552642822,
0.1818181723356247,
0.22641508281230927,
0.3404255211353302,
0.13636362552642822,
0.21052631735801697,
0.1702127605676651,
0.05714285373687744,
0.08888888359069824,
0.0476190447807312,
0.08888888359069824,
0.10256409645080566,
0.1111111044883728,
0.21276594698429108,
0.04999999701976776,
0.1860465109348297,
0.15686273574829102,
0.10810810327529907,
0.20000000298023224,
0.19999998807907104,
0.1428571343421936,
0.12765957415103912,
0.17391303181648254,
0.1249999925494194,
0.3199999928474426,
0.21276594698429108,
0.36734694242477417,
0.178571417927742,
0.21212120354175568,
0.22641508281230927,
0.10526315122842789,
0.17777776718139648,
0.09756097197532654,
0.11428570747375488,
0.1428571343421936,
0.12244897335767746,
0.09302324801683426,
0,
0.09302324801683426,
0.23255813121795654,
0.1538461446762085,
0.054054051637649536,
0.0555555522441864,
0.19512194395065308,
0.22727271914482117,
0.13333332538604736,
0.08163265138864517,
0,
0.17777776718139648,
0.05128204822540283,
0.0476190447807312,
0.05714285373687744,
0.09302324801683426
] | SJgzLkBKPB | true | [
"We propose a model-agnostic approach to explain the behaviour of black-box deep RL agents, trained to play Atari and board games, by highlighting relevant features of an input state."
] |
[
"To understand the inner work of deep neural networks and provide possible theoretical explanations, we study the deep representations through the untrained, random weight CNN-DCN architecture.",
"As a convolutional AutoEncoder, CNN indicates the portion of a convolutional neural network from the input to an intermediate convolutional layer, and DCN indicates the corresponding deconvolutional portion.",
"As compared with DCN training for pre-trained CNN, training the DCN for random-weight CNN converges more quickly and yields higher quality image reconstruction.",
"Then, what happens for the overall random CNN-DCN?",
"We gain intriguing results that the image can be reconstructed with good quality.",
"To gain more insight on the intermediate random representation, we investigate the impact of network width versus depth, number of random channels, and size of random kernels on the reconstruction quality, and provide theoretical justifications on empirical observations.",
"We further provide a fast style transfer application using the random weight CNN-DCN architecture to show the potential of our observation.",
"Deep neural networks have achieved impressive performance on various machine learning tasks.",
"However, our understanding of how these deep learning models operate remains limited.",
"Providing a theoretical explanation or empirical interpretation for their success is an important research area.",
"Existing works Arora et al. (2015; 2014) ; Paul & Venkatasubramanian (2014) propose mathematical models for learning architectures, however, the theoretical analysis of which fails to capture the state-of-the-art architectures.",
"Gilbert et al. (2017) ; Chang et al. (2018) leverage either compressive sensing or ordinary differential equations to facilitate the understanding of CNNs.",
"Ma et al. (2018) ; Hand & Voroninski (2017) deliver rigorous proofs about the invertibility of convolutional generative models.",
"Despite these promising progress, there is no solid theoretical foundation on why the overall random CNN-DCN architecture is capable for image reconstruction.",
"In this paper, we bridge the gap between the empirical observation and theoretical explanation of CNNs, especially the invertibility of the overall random CNN-DCN architecture.",
"To understand the deep representations of intermediate layers, a variety of visualization techniques have been developed in order to unveil the feature representation and hence the inner mechanism of convolutional neural networks (CNNs) Zeiler & Fergus (2014) ; Mahendran & Vedaldi (2015) ; Yosinski et al. (2015) ; Xu et al. (2015) .",
"In this work we propose applying randomization on deconvolutional networks (DCNs) for a systematic investigation of deep representations, and provide insights on the intrinsic properties of deep convolutional networks.",
"We first observe that training the DCN for reconstruction, the random CNN preserves richer information in the feature space.",
"The training on DCN converges faster for the random CNN contrasted to pre-trained CNN and yields higher quality image reconstruction.",
"It indicates there is rich information encoded in the random features; the pre-trained CNN discards some information irrelevant for classification and encodes relevant features in a way favorable for classification but harder for reconstruction.",
"This leads us to be curious about what happens if we feed the images to a CNN-DCN architecture where both the CNN and the DCN have random weights.",
"Our motivation for studying the overall random CNN-DCN architecture is threefold.",
"First, a series of works empirically showed that a certain feature learning architecture with random weights allowed satisfactory discriminative validity on object recognition tasks Jarrett et al. (2009) , and certain convolutional pooling architectures even with random weights can be inherently frequency selective and translation invariant, leading to the potential application of fast search of network architectures Saxe et al. (2011) .",
"Second, studying a complex system with random weights rather than learned determin-istic ones may lead to a better understanding of the system even in the learned case.",
"For example, in the field of compressed sensing, random sampling leads to breakthroughs in the understanding of the number of required measurements for a stable reconstruction of the signal Giryes et al. (2016) ; Gilbert et al. (2017) .",
"For highly complicated systems with nonlinear operations along the hidden layers, there are already some investigations on random deep neural networks Saxe et al. (2011); Arora et al. (2014) ; Ulyanov et al. (2017a) .",
"Third, as a reversible encoder-decoder architecture, deconvolution is a valuable visualization technique for studying the feature representation of deep convolutional nets.",
"To our knowledge there is no existing work on the random deconvolutional networks in the literature.",
"Our work on using deconvolution to study the random intermediate features of CNN provides new insights and inspires possible applications with untrained deep neural models.",
"Our main results and contributions are as follows.",
"We study the overall random CNN-DCN architecture to investigate the randomness in deconvolutional networks, i.e. there is no training at all for inverting the inputs that passes their information through a random weight convolutional network.",
"Surprisingly, the image is inverted with satisfactory quality.",
"The geometric and photometric features of the inputs are well preserved given a sufficient number of channels.",
"We provide empirical evidence as well as theoretical analysis on the reconstruction quality, and bound the error in terms of the number of random nonlinearities, the network architecture, the distribution of the random weights, and local similarity of the input which is high for natual images.",
"Extensive empirical study by varying the network width, depth, or kernel size has been performed to show the effectiveness on the inversion.",
"The CNN-DCN architecture with random weights can be very useful on texture synthesis, style transfer, image segmentation, image inpainting, etc.",
"As an example, we illustrate how fast style transfer can be applied using random weight CNN-DCN architecture.",
"Note that our approach can save a big amount of time and energy as we do not need to do the pre-training on deep models, and it is very flexible as we can easily try whatever nerual network architecture as we wish.",
"In this work, we introduce a novel investigation on deep random representations through the convolution-deconvolution architecture, which to our knowledge is the first study on the randomness of deconvolutional networks in the literature.",
"We extensively explore the potential of randomness for image reconstruction on deep neural networks, and found that images can be reconstructed with satisfactory quality when there are a sufficient number of channels.",
"Extensive investigations have been performed to show the effectiveness of the reconstruction.",
"We also provide theoretical analysis that a slight variant of the random CNN architecture has the ability to reconstruct the input image, and the output converges to the input image when the width of the network, i.e. number of channels, goes to infinity.",
"We also bound the reconstruction error between the input and the convergence value as a function of the network width and depth.",
"(2015) and AlexNet Krizhevsky et al. (2012) .",
"A convolutional layer is usually followed by a pooling layer, except for the last convolutional layer, Conv5.",
"For consistency, we will explore the output after the convolutional layer but before the pooling layer.",
"In what follows, \"feature representation\" or \"image representation\" denotes the feature vectors after the linear convolutional operator and the nonlinear activation operator but before the pooling operator for dimension reduction.",
"We build a CNN-DCN architecture on the layer of feature representation to be studied.",
"The convolution operator of a deconvolutional layer in DCN is the same as the convolution operator in CNN, and an upsampling operator is applied in DCN to invert the corresponding pooling operator in CNN, as designed in Dosovitskiy & Brox (2016).",
"We will focus on the representations of the convolutional layers, since Dosovitskiy et al. Dosovitskiy & Brox (2016) build DCNs for each layer of the pre-trained AlexNet and find that the predicted image from the fully connected layers becomes very vague.",
"For the activation operator, we apply the leaky ReLU nonlinearity with slope 0.2, that is, r(x) = x if x ≥ 0 and otherwise r(x) = 0.2x.",
"At the end of the DCN, a final Crop layer is added to cut the output of DeConv1 to the same shape as the original images.",
"We build deconvolutional networks on both VGG16 and AlexNet, and most importantly, we focus on the random features of the CNN structure when training the corresponding DCN.",
"Then we do no training for deconvolution and explore the properties of the purely random CNN-DCN architecture on VGG16."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4285714328289032,
0.14999999105930328,
0.25641024112701416,
0.14814814925193787,
0.25,
0.25,
0.3589743673801422,
0,
0.12903225421905518,
0.05882352590560913,
0.1249999925494194,
0.09999999403953552,
0.10526315122842789,
0.25,
0.25,
0.1666666567325592,
0.1818181723356247,
0.1666666567325592,
0.31578946113586426,
0.1702127605676651,
0.1818181723356247,
0.19999998807907104,
0.11594202369451523,
0.1428571343421936,
0.1666666567325592,
0.12244897335767746,
0.20512819290161133,
0.11764705181121826,
0.3181818127632141,
0.07407406717538834,
0.26923075318336487,
0.2222222238779068,
0.17142856121063232,
0.22641508281230927,
0.10256409645080566,
0.15789473056793213,
0.1666666567325592,
0.14814814925193787,
0.1666666567325592,
0.3199999928474426,
0.2666666507720947,
0.23076923191547394,
0.2702702581882477,
0.07692307233810425,
0.05882352590560913,
0.0624999962747097,
0.09302324801683426,
0.3030303120613098,
0.1304347813129425,
0.18518517911434174,
0.0952380895614624,
0.10256409645080566,
0.2380952388048172,
0.2702702581882477
] | S1xTMyHYwB | true | [
"We investigate the deep representation of untrained, random weight CNN-DCN architectures, and show their image reconstruction quality and possible applications."
] |
[
" The current trade-off between depth and computational cost makes it difficult to adopt deep neural networks for many industrial applications, especially when computing power is limited.",
"Here, we are inspired by the idea that, while deeper embeddings are needed to discriminate difficult samples, a large number of samples can be well discriminated via much shallower embeddings.",
"In this study, we introduce the concept of decision gates (d-gate), modules trained to decide whether a sample needs to be projected into a deeper embedding or if an early prediction can be made at the d-gate, thus enabling the computation of dynamic representations at different depths. ",
"The proposed d-gate modules can be integrated with any deep neural network and reduces the average computational cost of the deep neural networks while maintaining modeling accuracy.",
"Experimental results show that leveraging the proposed d-gate modules led to a ~38% speed-up and ~39% FLOPS reduction on ResNet-101 and ~46% speed-up and $\\sim$36\\% FLOPS reduction on DenseNet-201 trained on the CIFAR10 dataset with only ~2% drop in accuracy.",
"Past studies such as BID15 have shown that deeper architectures often lead to better modeling performance; however, deeper architectures also pose a number of issues.",
"Besides becoming more prone to overfitting and becoming more difficult to train, the trade-off between depth and computational cost makes it difficult to adopt deeper architectures for many industrial applications.He et al. BID6 tackled the former issue of degradation in learning deeper neural networks (e.g., vanishing gradient) by introducing the concept of residual learning, where learning is based on the residual mapping rather than directly on the unreferenced mapping.",
"Following that, Xie et al. BID18 took advantage of the inception idea (i.e, split-transform-merge strategy) within a residual block structure to provide better subspace modeling while resolving the degradation problem at the same time, resulting in a ResNext architecture with improved modeling accuracy.",
"To tackle the issue of computational cost, a wide variety of methods have been proposed, including: precision reduction BID9 , model compression BID5 , teacher-student strategies BID7 , and evolutionary algorithms BID12 BID13 .More",
"recently, conditional computation BID0 BID3 BID11 BID17 BID1 and early prediction BID16 methods have been proposed to tackle this issue, which involve the dynamic execution of different modules within a network. Conditional",
"computation methods have largely been motivated by the idea that residual networks can be considered as an ensemble of shallower networks. As such, these",
"methods take advantage of skip connections to determine which residual modules are necessary to be executed, with most leveraging reinforcement learning.In this study, we explore the idea of early prediction but instead draw inspiration from the soft-margin support vector BID2 theory for decision-making. Specifically,",
"we introduce the concept of decision gates (d-gate), modules trained to decide whether a sample needs to be projected into a deeper embedding or if an early prediction can be made at the d-gate, thus enabling the conditional computation of dynamic representations at different depths. The proposed",
"d-gate modules can be integrated with any deep neural network without the need to train networks from scratch, and thus reduces the average computational complexity of the deep neural networks while maintaining modeling accuracy.",
"The efficacy of the proposed d-gate modules is examined with two different network architectures (ResNet101 BID6 and DenseNet201 BID8 ) on the CIFAR10 dataset.",
"A key benefit of the proposed d-gate modules is that it enables fine control over the trade-off between modeling accuracy and computational cost by adjusting the d-gate decision thresholds.",
"By decreasing the d-gate decision thresholds, the number of samples undergoing early prediction increases, thus reducing the average computational cost of network predictions greatly.",
"For this study, we integrated two d-gate modules in ResNet-101 (after the first and second main blocks) and DenseNet-201 (after the first and second dense blocks), and explore different d-gate configurations.",
"The networks are implemented in the Pytorch framework and the prediction speeds are reported based on single Nvidia Titan Xp GPU.It can be observed from TAB0 that the computational cost of ResNet network can be reduced by 67 MFLOPS while maintaining the same level of accuracy as to the original ResNet-101 by integrating two d-gate modules with decision thresholds of (t1, t2) = (2.5, 2.5).",
"The integration of d-gate modules can reduce the computational cost of ResNet-101 network by ∼39% (i.e., lower by 1.95 GFLOPS) with 1.7% drop in accuracy compared to the original ResNet-101 (with distance thresholds (t1, t2) = (1.0, 2.0) in d-gate1 and d-gate2), resulting in a ∼38% speed-up.",
"The experiments for DenseNet-201 show that it is possible to reduce the number of FLOPs by 970 MFLOPs (∼36% reduction) with only a ∼2% drop in accuracy, leading to a ∼46% speed-up.",
"Furthermore, a 2.3× speed-up can be achieved with d-gate modules compared to the original DenseNet-201 within a 3% accuracy margin.",
"Based on the experimental results, the proposed d-gate modules lead to a significant increase in prediction speed, making it well-suited for industrial applications.In addition to the d-gate modules being proposed, one of the key contributions of this paper is the introduction of a hinge loss for training the d-gate modules.",
"Past studies BID10 have argued that crossentropy results in a small margin between the decision boundaries and the training data.",
"As such, it is very difficult to trust the confidence values of the Softmax layer to decide about the sample since there is no valuable information in the Softmax output.",
"To demonstrate the effectiveness of the hinge loss leveraged in the proposed d-gates compared to the cross-entropy loss, an additional comparative experiment was conducted.",
"More specifically, two decision gates were added to ResNet101 in the same way as reported.",
"However, rather than train using the proposed hinge loss, the decision gates were instead trained via a cross-entropy loss.",
"This enables us to compare the effect of hinge loss vs. cross-entropy loss on decision gate functionality.",
"FIG1 demonstrates the accuracy vs. number of FLOPs for the network where the decision gates were trained based on the proposed hinge loss approach compared to trained using a regular cross-entropy training procedure.",
"It can be observed that, with the same number of FLOPs in the network, the network where the decision gates were trained based on the proposed hinge loss provides much higher modeling accuracy compared to that trained via cross-entropy loss.",
"The accuracy gap increases exponentially when the decision gates are configured such that the network uses fewer number of FLOPs.",
"What this illustrates is the aforementioned issue with the use of cross-entropy loss and decision boundaries."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17391303181648254,
0.0833333283662796,
0.09836065024137497,
0.13636362552642822,
0.11538460850715637,
0.09302324801683426,
0.1315789371728897,
0.09999999403953552,
0.039215680211782455,
0.11538460850715637,
0.04651162400841713,
0.032258059829473495,
0.09999999403953552,
0.16326530277729034,
0.04651162400841713,
0,
0,
0,
0.07692307233810425,
0.060606054961681366,
0.07999999821186066,
0.09999999403953552,
0.14035087823867798,
0.051282044500112534,
0.045454539358615875,
0.04878048226237297,
0.11428570747375488,
0.052631575614213943,
0.1666666567325592,
0.16326530277729034,
0.07407406717538834,
0,
0
] | ByGYF_J4j7 | true | [
"This paper introduces a new dynamic feature representation approach to provide a more efficient way to do inference on deep neural networks."
] |
[
"This work provides an additional step in the theoretical understanding of neural networks.",
"We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees.",
"We empirically verify this and show that this does not hold when the initial conditions are chosen at random.",
"The proof of convergence investigates the interaction between the two layers of the network.",
"Our results highlight the importance of using symmetry in the design of neural networks.",
"Building a theory that can help to understand neural networks and guide their construction is one of the current challenges of machine learning.",
"Here we wish to shed some light on the role symmetry plays in the construction of neural networks.",
"It is well-known that symmetry can be used to enhance the performance of neural networks.",
"For example, convolutional neural networks (CNNs) (see Lecun et al. (1998) ) use the translational symmetry of images to classify images better than fully connected neural networks.",
"Our focus is on the role of symmetry in the initialization stage.",
"We show that symmetry-based initialization can be the difference between failure and success.",
"On a high-level, the study of neural networks can be partitioned to three different aspects.",
"Expressiveness Given an architecture, what are the functions it can approximate well?",
"Training Given a network with a \"proper\" architecture, can the network fit the training data and in a reasonable time?",
"Generalization Given that the training seemed successful, will the true error be small as well?",
"We study these aspects for the first \"non trivial\" case of neural networks, networks with one hidden layer.",
"We are mostly interested in the initialization phase.",
"If we take a network with the appropriate architecture, we can always initialize it to the desired function.",
"A standard method (that induces a non trivial learning problem) is using random weights to initialize the network.",
"A different reasonable choice is to require the initialization to be useful for an entire class of functions.",
"We follow the latter option.",
"Our focus is on the role of symmetry.",
"We consider the following class of symmetric functions S = S n = n ∑ i=0 a i · 1 |x|=i : a 1 , . . . , a n ∈ {±1} , where x ∈ {0, 1} n and |x| = ∑ i x i .",
"The functions in this class are invariant under arbitrary permutations of the input's coordinates.",
"The parity function π(x) = (−1) |x| and the majority function are well-known examples of symmetric functions.",
"Expressiveness for this class was explored by Minsky and Papert (1988) .",
"They showed that the parity function cannot be represented using a network with limited \"connectivity\".",
"Contrastingly, if we use a fully connected network with one hidden layer and a common activation function (like sign, sigmoid, or ReLU) only O(n) neurons are needed.",
"We provide such explicit representations for all functions in S; see Lemmas 1 and 2.",
"We also provide useful information on both the training phase and generalization capabilities of the neural network.",
"We show that, with proper initialization, the training process (using standard SGD) efficiently converges to zero empirical error, and that consequently the network has small true error as well.",
"Theorem 1.",
"There exists a constant c > 1 so that the following holds.",
"There exists a network with one hidden layer, cn neurons with sigmoid or ReLU activations, and an initialization such that for all distributions D over X = {0, 1} n and all functions f ∈ S with sample size m ≥ c(n+log(1/δ ))/ε, after performing poly(n) SGD updates with a fixed step size h = 1/poly(n) it holds that",
"is the network after training over S.",
"The number of parameters in the network described in Theorem 1 is Ω(n 2 ).",
"So in general one could expect overfitting when the sample size is as small as O(n).",
"Nevertheless, the theorem provides generalization guarantees, even for such a small sample size.",
"The initialization phase plays an important role in proving Theorem 1.",
"To emphasize this, we report an empirical phenomenon (this is \"folklore\").",
"We show that a network cannot learn parity from a random initialization (see Section 5.3).",
"On one hand, if the network size is big, we can bring the empirical error to zero (as suggested in Soudry and Carmon (2016) ), but the true error is close to 1/2.",
"On the other hand, if its size is too small, the network is not even able to achieve small empirical error (see Figure 5 ).",
"We observe a similar phenomenon also for a random symmetric function.",
"An open question remains: why is it true that a sample of size polynomial in n does not suffice to learn parity (with random initialization)?",
"A similar phenomenon was theoretically explained by Shamir (2016) and Song et al. (2017) .",
"The parity function belongs to the class of all parities",
"where · is the standard inner product.",
"This class is efficiently PAC-learnable with O(n) samples using Gaussian elimination.",
"A continuous version of P was studied by Shamir (2016) and Song et al. (2017) .",
"To study the training phase, they used a generalized notion of statistical queries (SQ); see Kearns (1998) .",
"In this framework, they show that most functions in the class P cannot be efficiently learned (roughly stated, learning the class requires an exponential amount of resources).",
"This framework, however, does not seem to capture actual training of neural networks using SGD.",
"For example, it is not clear if one SGD update corresponds to a single query in this model.",
"In addition, typically one receives a dataset and performs the training by going over it many times, whereas the query model estimates the gradient using a fresh batch of samples in each iteration.",
"The query model also assumes the noise to be adversarial, an assumption that does not necessarily hold in reality.",
"Finally, the SQ-based lower bound holds for every initialization (in particular, for the initialization we use here), so it does not capture the efficient training process Theorem 1 describes.",
"Theorem 1 shows, however, that with symmetry-based initialization, parity can be efficiently learned.",
"So, in a nutshell, parity can not be learned as part of P, but it can be learned as part of S. One could wonder why the hardness proof for P cannot be applied for S as both classes consist of many input sensitive functions.",
"The answer lies in the fact that P has a far bigger statistical dimension than S (all functions in P are orthogonal to each other, unlike S).",
"The proof of the theorem utilizes the different behavior of the two layers in the network.",
"SGD is performed using a step size h that is polynomially small in n.",
"The analysis shows that in a polynomial number of steps that is independent of the choice of h the following two properties hold:",
"(i) the output neuron reaches a \"good\" state and",
"(ii) the hidden layer does not change in a \"meaningful\" way.",
"These two properties hold when h is small enough.",
"In Section 5.2, we experiment with large values of h.",
"We see that, although the training error is zero, the true error becomes large.",
"Here is a high level description of the proof.",
"The neurons in the hidden layer define an \"embedding\" of the inputs space X = {0, 1} n into R (a.k.a. the feature map).",
"This embedding changes in time according to the training examples and process.",
"The proof shows that if at any point in time this embedding has good enough margin, then training with standard SGD quickly converges.",
"This is explained in more detail in Section 3.",
"It remains an interesting open problem to understand this phenomenon in greater generality, using a cleaner and more abstract language.",
"This work demonstrates that symmetries can play a critical role when designing a neural network.",
"We proved that any symmetric function can be learned by a shallow neural network, with proper initialization.",
"We demonstrated by simulations that this neural network is stable under corruption of data, and that the small step size is the proof is necessary.",
"We also demonstrated that the parity function or a random symmetric function cannot be learned with random initialization.",
"How to explain this empirical phenomenon is still an open question.",
"The works Shamir (2016) and Song et al. (2017) treated parities using the language of SQ.",
"This language obscures the inner mechanism of the network training, so a more concrete explanation is currently missing.",
"We proved in a special case that the standard SGD training of a network efficiently produces low true error.",
"The general problem that remains is proving similar results for general neural networks.",
"A suggestion for future works is to try to identify favorable geometric states of the network that guarantee fast convergence and generalization.",
"Proof.",
"For all k ∈ A and x ∈ X of weight k,",
"the first inequality holds since ∆ i (x) ≥ 0 for all i and x.",
"For all k ∈ A and x ∈ X of weight k,",
"= 2 exp(−2.5)/(1 − exp(−5)) < 0.17; the first equality follows from the definition, the second equality follows from σ (5(x + 0.5)) − σ (5(x − 0.5)) = σ (5(x + 0.5)) + σ (5(−x + 0.5)) − 1 for all x, the first inequality neglects the negative sums, and the second inequality follows because exp(ξ ) > σ (ξ ) for all ξ ."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25806450843811035,
0.2222222238779068,
0.1111111044883728,
0.13793103396892548,
0.2666666507720947,
0.25,
0.22857142984867096,
0.3030303120613098,
0.1904761791229248,
0.13793103396892548,
0.12903225421905518,
0.3030303120613098,
0.13333332538604736,
0.11764705181121826,
0.0624999962747097,
0.2222222238779068,
0.07692307233810425,
0.11764705181121826,
0.0555555522441864,
0.17142856121063232,
0.08695651590824127,
0.1538461446762085,
0.21276594698429108,
0.1875,
0.1764705777168274,
0.06896550953388214,
0.060606054961681366,
0,
0,
0.1764705777168274,
0.043478257954120636,
0.06666666269302368,
0,
0.07999999821186066,
0.1249999925494194,
0.12121211737394333,
0.06451612710952759,
0,
0,
0.060606054961681366,
0.08695651590824127,
0.04878048226237297,
0.0714285671710968,
0.09302324801683426,
0,
0.2142857164144516,
0.07999999821186066,
0.06896550953388214,
0.060606054961681366,
0.17142856121063232,
0.1860465109348297,
0.1818181723356247,
0,
0.0833333283662796,
0.05405404791235924,
0.04651162400841713,
0.06451612710952759,
0.11538460850715637,
0.04651162400841713,
0.13333332538604736,
0,
0.10810810327529907,
0.07407406717538834,
0.06896550953388214,
0.07407406717538834,
0.06896550953388214,
0.06666666269302368,
0.14814814925193787,
0.0952380895614624,
0.06666666269302368,
0,
0,
0,
0.1875,
0.17142856121063232,
0.1538461446762085,
0.11764705181121826,
0,
0.11764705181121826,
0.11428570747375488,
0.1111111044883728,
0.13333332538604736,
0.10256409645080566,
0.06896550953388214,
0.0624999962747097,
0.06896550953388214,
0.03703703358769417
] | Skeh-xBYDH | true | [
"When initialized properly, neural networks can learn the simple class of symmetric functions; when initialized randomly, they fail. "
] |
[
"Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts.",
"While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2)most architecture search methods require vast computational resources.",
"We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method.",
"We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents.",
"This is accomplished by using (approximate) network morphism operators for generating children.",
"The combination of these two contributions allows finding models that are on par or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance.",
"Deep learning has enabled remarkable progress on a variety of perceptual tasks, such as image recognition BID12 , speech recognition , and machine translation BID0 .",
"One crucial aspect for this progress are novel neural architectures BID25 He et al., 2016; BID7 .",
"Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process.",
"Because of this, there is growing interest in automatic architecture search methods (Elsken et al., 2018) .",
"Some of the architectures found in an automated way have already outperformed the best manually-designed ones; however, algorithms such as by BID32 ; ; BID20 BID36 for finding these architectures require enormous computational resources often in the range of thousands of GPU days.Prior work on architecture search has typically framed the problem as a single-objective optimization problem.",
"However, most applications of deep learning do not only require high predictive performance on unseen data but also low resource-consumption in terms of, e.g., inference time, model size or energy consumption.",
"Moreover, there is typically an implicit trade-off between predictive performance and consumption of resources.",
"Recently, several architectures have been manually designed that aim at reducing resource-consumption while retaining high predictive performance BID8 BID22 .",
"Automatically found neural architectures have also been down-scaled to reduce resource consumption .",
"However, very little previous work has taken the trade-off between resource-consumption and predictive performance into account during automatic architecture search.In this work, we make the following two main contributions:1.",
"To overcome the need for thousands of GPU days BID32 BID21 , we make use of operators acting on the space of neural network architectures that preserve the function a network represents, dubbed network morphisms (Chen et al., 2015; BID27 , obviating training from scratch and thereby substantially reducing the required training time per network.",
"This mechanism can be interpreted as Lamarckian inheritance in the context of evolutionary algorithms, where Lamarckism refers to a mechanism which allows passing skills acquired during an individual's lifetime (e.g., by means of learning), on to children by means of inheritance.",
"Since network morphisms are limited to solely increasing a network's size (and therefore likely also resource consumption), we introduce approximate network morphisms (Section 3.2) to also allow shrinking networks, which is essential in the context of multi-objective search.",
"The proposed Lamarckian inheritance mechanism could in principle be combined with any evolutionary algorithm for architecture search, or any other method using (a combination of) localized changes in architecture space.2.",
"We propose a Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign, dubbed LEMONADE, Section 4, which is suited for the joint optimization of several objectives, such as predictive performance, inference time, or number of parameters.",
"LEMONADE maintains a population of networks on an approximation of the Pareto front of the multiple objectives.",
"In contrast to generic multi-objective algorithms, LEMONADE exploits that evaluating certain objectives (such as an architecture's number of parameters) is cheap while evaluating the predictive performance on validation data is expensive (since it requires training the model first).",
"Thus, LEMONADE handles its various objectives differently: it first selects a subset of architectures, assigning higher probability to architectures that would fill gaps on the Pareto front for the \"cheap\" objectives; then, it trains and evaluates only this subset, further reducing the computational resource requirements during architecture search.",
"In contrast to other multi-objective architecture search methods, LEMONADE",
"(i) does not require to define a trade-off between performance and other objectives a-priori (e.g., by weighting objectives when using scalarization methods) but rather returns a set of architectures, which allows the user to select a suitable model a-posteriori;",
"(ii) LEMONADE does not require to be initialized with well performing architectures; it can be initialized with trivial architectures and hence requires less prior knowledge.",
"Also, LEMONADE can handle various search spaces, including complex topologies with multiple branches and skip connections.We evaluate LEMONADE for up to five objectives on two different search spaces for image classification:",
"(i) non-modularized architectures and",
"(ii) cells that are used as repeatable building blocks within an architecture BID31 and also allow transfer to other data sets.",
"LEMONADE returns a population of CNNs covering architectures with 10 000 to 10 000 000 parameters.Within only 5 days on 16 GPUs, LEMONADE discovers architectures that are competitive in terms of predictive performance and resource consumption with hand-designed networks, such as MobileNet V2 BID22 , as well as architectures that were automatically designed using 40x greater resources and other multi-objective methods (Dong et al., 2018) .",
"We have proposed LEMONADE, a multi-objective evolutionary algorithm for architecture search.",
"The algorithm employs a Lamarckian inheritance mechanism based on (approximate) network morphism operators to speed up the training of novel architectures.",
"Moreover, LEMONADE exploits the fact that evaluating several objectives, such as the performance of a neural network, is orders of magnitude more expensive than evaluating, e.g., a model's number of parameters.",
"Experiments on CIFAR-10 and ImageNet64x64 show that LEMONADE is able to find competitive models and cells both in terms of accuracy and of resource efficiency.We believe that using more sophisticated concepts from the multi-objective evolutionary algorithms literature and using other network operators (e.g., crossovers and advanced compression methods) could further improve LEMONADE's performance in the future."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.060606054961681366,
0.035087715834379196,
0.22641508281230927,
0.22727271914482117,
0.06896550953388214,
0.0615384578704834,
0.14999999105930328,
0.05882352590560913,
0.11428570747375488,
0,
0.1230769157409668,
0.03999999538064003,
0.06451612710952759,
0,
0,
0.043478257954120636,
0.12903225421905518,
0.18867924809455872,
0.038461532443761826,
0.2222222238779068,
0.3199999928474426,
0.12903225421905518,
0.038461532443761826,
0.12903225421905518,
0,
0.07407406717538834,
0.05128204822540283,
0.17391303181648254,
0.0952380895614624,
0.052631575614213943,
0.0845070406794548,
0.2857142686843872,
0.2631579041481018,
0.043478257954120636,
0.1492537260055542
] | ByME42AqK7 | true | [
"We propose a method for efficient Multi-Objective Neural Architecture Search based on Lamarckian inheritance and evolutionary algorithms."
] |
[
"We address two challenges of probabilistic topic modelling in order to better estimate\n",
"the probability of a word in a given context, i.e., P(wordjcontext) : (1) No\n",
"Language Structure in Context: Probabilistic topic models ignore word order by\n",
"summarizing a given context as a “bag-of-word” and consequently the semantics\n",
"of words in the context is lost.",
"In this work, we incorporate language structure\n",
"by combining a neural autoregressive topic model (TM) with a LSTM based language\n",
"model (LSTM-LM) in a single probabilistic framework.",
"The LSTM-LM\n",
"learns a vector-space representation of each word by accounting for word order\n",
"in local collocation patterns, while the TM simultaneously learns a latent representation\n",
"from the entire document.",
"In addition, the LSTM-LM models complex\n",
"characteristics of language (e.g., syntax and semantics), while the TM discovers\n",
"the underlying thematic structure in a collection of documents.",
"We unite two complementary\n",
"paradigms of learning the meaning of word occurrences by combining\n",
"a topic model and a language model in a unified probabilistic framework, named\n",
"as ctx-DocNADE.",
"(2) Limited Context and/or Smaller training corpus of documents:\n",
"In settings with a small number of word occurrences (i.e., lack of context)\n",
"in short text or data sparsity in a corpus of few documents, the application of TMs\n",
"is challenging.",
"We address this challenge by incorporating external knowledge\n",
"into neural autoregressive topic models via a language modelling approach: we\n",
"use word embeddings as input of a LSTM-LM with the aim to improve the wordtopic\n",
"mapping on a smaller and/or short-text corpus.",
"The proposed DocNADE\n",
"extension is named as ctx-DocNADEe.\n\n",
"We present novel neural autoregressive topic model variants coupled with neural\n",
"language models and embeddings priors that consistently outperform state-of-theart\n",
"generative topic models in terms of generalization (perplexity), interpretability\n",
"(topic coherence) and applicability (retrieval and classification) over 6 long-text\n",
"and 8 short-text datasets from diverse domains.",
"Probabilistic topic models, such as LDA BID1 , Replicated Softmax (RSM) (Salakhutdinov & Hinton, 2009 ) and Document Neural Autoregressive Distribution Estimator (DocNADE) variants BID12 BID34 BID15 BID8 are often used to extract topics from text collections, and predict the probabilities of each word in a given document belonging to each topic.",
"Subsequently, they learn latent document representations that can be used to perform natural language processing (NLP) tasks such as information retrieval (IR), document classification or summarization.",
"However, such probabilistic topic models ignore word order and represent a given context as a bag of its words, thereby disregarding semantic information.To motivate our first task of extending probabilistic topic models to incorporate word order and language structure, assume that we conduct topic analysis on the following two sentences: When estimating the probability of a word in a given context (here: P (\"bear\"|context)), traditional topic models do not account for language structure since they ignore word order within the context and are based on \"bag-of-words\" (BoWs) only.",
"In this particular setting, the two sentences have the same unigram statistics, but are about different topics.",
"On deciding which topic generated the word \"bear\" in the second sentence, the preceding words \"market falls\" make it more likely that it was generated by a topic that assigns a high probability to words related to stock market trading, where \"bear territory\" is a colloquial expression in the domain.",
"In addition, the language structure (e.g., syntax and semantics) is also ignored.",
"For instance, the word \"bear\" in the first sentence is a proper noun and subject while it is an object in the second.",
"In practice, topic models also ignore functional words such as \"into\", which may not be appropriate in some scenarios.Recently, BID23 have shown that a deep contextualized LSTM-based language model (LSTM-LM) is able to capture different language concepts in a layer-wise fashion, e.g., the lowest layer captures language syntax and topmost layer captures semantics.",
"However, in LSTM-LMs the probability of a word is a function of its sentence only and word occurrences are modeled in a fine granularity.",
"Consequently, LSTM-LMs do not capture semantics at a document level.",
"To this end, recent studies such as TDLM BID14 , Topic-RNN (Dieng et al., 2016) and TCNLM BID32 have integrated the merits of latent topic and neural language models (LMs); however, they have focused on improving LMs with global (semantics) dependencies using latent topics.Similarly, while bi-gram LDA based topic models BID31 BID33 and n-gram based topic learning BID15 can capture word order in short contexts, they are unable to capture long term dependencies and language concepts.",
"In contrast, DocNADE variants BID12 BID8 ) learns word occurrences across documents i.e., coarse granularity (in the sense that the topic assigned to a given word occurrence equally depends on all the other words appearing in the same document); however since it is based on the BoW assumption all language structure is ignored.",
"In language modeling, BID17 have shown that recurrent neural networks result in a significant reduction of perplexity over standard n-gram models.Contribution 1: We introduce language structure into neural autoregressive topic models via a LSTM-LM, thereby accounting for word ordering (or semantic regularities), language concepts and long-range dependencies.",
"This allows for the accurate prediction of words, where the probability of each word is a function of global and local (semantics) contexts, modeled via DocNADE and LSTM-LM, respectively.",
"The proposed neural topic model is named as contextualized-Document Neural Autoregressive Distribution Estimator (ctx-DocNADE) and offers learning complementary semantics by combining joint word and latent topic learning in a unified neural autoregressive framework.",
"For instance, FIG0 (left and middle) shows the complementary topic and word semantics, based on TM and LM representations of the term \"fall\".",
"Observe that the topic captures the usage of \"fall\" in the context of stock market trading, attributed to the global (semantic) view.While this is a powerful approach for incorporating language structure and word order in particular for long texts and corpora with many documents, learning from contextual information remains challenging in settings with short texts and few documents, since (1) limited word co-occurrences or little context (2) significant word non-overlap in such short texts and (3) small training corpus of documents lead to little evidence for learning word co-occurrences.",
"However, distributional word representations (i.e. word embeddings) BID22 have shown to capture both the semantic and syntactic relatedness in words and demonstrated impressive performance in NLP tasks.For example, assume that we conduct topic analysis over the two short text fragments: Deal with stock index falls and Brace for market share drops.",
"Traditional topic models with \"BoW\" assumption will not be able to infer relatedness between word pairs such as (falls, drops) due to the lack of word-overlap and small context in the two phrases.",
"However, in the distributed embedding space, the word pairs are semantically related as shown in FIG0 (left).",
"DISPLAYFORM0 Related work such as BID26 employed web search results to improve the information in short texts and BID24 introduced word similarity via thesauri and dictionaries into LDA.",
"BID5 and BID20 integrated word embeddings into LDA and Dirichlet Multinomial Mixture (DMM) BID21 models.",
"Recently, BID8 extends DocNADE by introducing pre-trained word embeddings in topic learning.",
"However, they ignore the underlying language structure, e.g., word ordering, syntax, etc.",
"In addition, DocNADE and its extensions outperform LDA and RSM topic models in terms of perplexity and IR.Contribution 2: We incorporate distributed compositional priors in DocNADE: we use pre-trained word embeddings via LSTM-LM to supplement the multinomial topic model (i.e., DocNADE) in learning latent topic and textual representations on a smaller corpus and/or short texts.",
"Knowing similarities in a distributed space and integrating this complementary information via a LSTM-LM, a topic representation is much more likely and coherent.Taken together, we combine the advantages of complementary learning and external knowledge, and couple topic-and language models with pre-trained word embeddings to model short and long text documents in a unified neural autoregressive framework, named as ctx-DocNADEe.",
"Our approach learns better textual representations, which we quantify via generalizability (e.g., perplexity), interpretability (e.g., topic extraction and coherence) and applicability (e.g., IR and classification).To",
"illustrate our two contributions, we apply our modeling approaches to 7 long-text and 8 short-text datasets from diverse domains and demonstrate that our approach consistently outperforms state-ofthe-art generative topic models. Our",
"learned representations, result in a gain of: (1) 4.6% (.790 vs .755) in topic coherence, (2) 6.5% (.615 vs .577) in precision at retrieval fraction 0.02, and (3) 4.4% (.662 vs .634) in F 1 for text classification, averaged over 6 long-text and 8 short-text datasets.When applied to short-text and long-text documents, our proposed modeling approaches generate contextualized topic vectors, which we name textTOvec. The",
"code is available at https: //github.com/pgcool/textTOvec.",
"In this work, we have shown that accounting for language concepts such as word ordering, syntactic and semantic information in neural autoregressive topic models helps to better estimate the probability of a word in a given context.",
"To this end, we have combined a neural autoregressive topic-(i.e., DocNADE) and a neural language (e.g., LSTM-LM) model in a single probabilistic framework with an aim to introduce language concepts in each of the autoregressive steps of the topic model.",
"This facilitates learning a latent representation from the entire document whilst accounting for the local dynamics of the collocation patterns, encoded in the internal states of LSTM-LM.",
"We further augment this complementary learning with external knowledge by introducing word embeddings.",
"Our experimental results show that our proposed modeling approaches consistently outperform stateof-the-art generative topic models, quantified by generalization (perplexity), topic interpretability (coherence), and applicability (text retrieval and categorization) on 15 datasets.Label: training Instructors shall have tertiary education and experience in the operation and maintenance of the equipment or sub-system of Plant.",
"They shall be proficient in the use of the English language both written and oral.",
"They shall be able to deliver instructions clearly and systematically.",
"The curriculum vitae of the instructors shall be submitted for acceptance by the Engineer at least 8 weeks before the commencement of any training.Label: maintenance The Contractor shall provide experienced staff for 24 hours per Day, 7 Days per week, throughout the Year, for call out to carry out On-call Maintenance for the Signalling System.Label: cables Unless otherwise specified, this standard is applicable to all cables which include single and multi-core cables and wires, Local Area Network (LAN) cables and Fibre Optic (FO) cables.Label: installation The Contractor shall provide and permanently install the asset labels onto all equipment supplied under this Contract.",
"The Contractor shall liaise and co-ordinate with the Engineer for the format and the content of the labels.",
"The Contractor shall submit the final format and size of the labels as well as the installation layout of the labels on the respective equipment, to the Engineer for acceptance.Label: operations, interlocking It shall be possible to switch any station Interlocking capable of reversing the service into \"Auto-Turnaround Operation\".",
"This facility once selected shall automatically route Trains into and out of these stations, independently of the ATS system.",
"At stations where multiple platforms can be used to reverse the service it shall be possible to select one or both platforms for the service reversal.",
"TAB10 : Perplexity scores for different λ in Generalization task: Ablation over validation set labels are not used during training.",
"The class labels are only used to check if the retrieved documents have the same class label as the query document.",
"To perform document retrieval, we use the same train/development/test split of documents discussed in data statistics (experimental section) for all the datasets during learning.See TAB1 for the hyperparameters in the document retrieval task."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.27586206793785095,
0.12903225421905518,
0.2222222238779068,
0.07692307233810425,
0.17391303181648254,
0.17391303181648254,
0.2857142686843872,
0.17391303181648254,
0.14814814925193787,
0.0714285671710968,
0,
0.09090908616781235,
0.20689654350280762,
0.23999999463558197,
0,
0.07999999821186066,
0.38461539149284363,
0.07999999821186066,
0.06666666269302368,
0.13333332538604736,
0,
0.29629629850387573,
0.13333332538604736,
0,
0,
0,
0.23076923191547394,
0.23999999463558197,
0.3199999928474426,
0.07999999821186066,
0.08695651590824127,
0.15625,
0.09756097197532654,
0.2278480976819992,
0,
0.1111111044883728,
0.19999998807907104,
0.11428570747375488,
0.24242423474788666,
0.17142856121063232,
0,
0.20000000298023224,
0.158730149269104,
0.33898305892944336,
0.1463414579629898,
0.2222222238779068,
0.1666666567325592,
0.20000000298023224,
0.15625,
0.25531914830207825,
0.06451612710952759,
0.1395348757505417,
0.13333332538604736,
0.1428571343421936,
0.06666666269302368,
0.20895521342754364,
0.2686567008495331,
0.09999999403953552,
0.22727271914482117,
0.18666666746139526,
0,
0.35999998450279236,
0.35999998450279236,
0.1538461446762085,
0,
0.16129031777381897,
0.2666666507720947,
0.1538461446762085,
0.08695651590824127,
0.19999998807907104,
0.14814814925193787,
0.11764705181121826,
0.10810810327529907,
0.1111111044883728,
0.05882352590560913,
0.13636362552642822
] | rkgoyn09KQ | true | [
"Unified neural model of topic and language modeling to introduce language structure in topic models for contextualized topic vectors "
] |
[
"The large memory requirements of deep neural networks strain the capabilities of many devices, limiting their deployment and adoption.",
"Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization.",
"In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques.",
"The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors.",
"Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x; with the same model accuracy, this results in up to a 1.51x improvement over the state-of-the-art.",
"The continued success of deep neural networks (DNNs) comes with increasing demands on compute, memory, and networking resources.",
"Moreover, the correlation between model size and accuracy suggests that tomorrow's networks will only grow larger.",
"This growth presents a challenge for resource-constrained platforms such as mobile phones and wireless sensors.",
"As new hardware now enables executing DNN inferences on these devices BID0 Qualcomm, 2017) , a practical issue that remains is reducing the burden of distributing the latest models especially in regions of the world not using high-bandwidth networks.",
"For instance, it is estimated that, globally, 800 million users will be using 2G networks by 2020 BID11 , which can take up to 30 minutes to download just 20MB of data.",
"By contrast, today's DNNs are on the order of tens to hundreds of MBs, making them difficult to distribute.",
"In addition to network bandwidth, storage capacity on resource-constrained devices is limited, as more applications look to leverage DNNs.",
"Thus, in order to support state-of-the-art deep learning methods on edge devices, methods to reduce the size of DNN models without sacrificing model accuracy are needed.Model compression is a popular solution for this problem.",
"A variety of compression algorithms have been proposed in recent years and many exploit the intrinsic redundancy in model weights.",
"Broadly speaking, the majority of this work has focused on ways of simplifying or eliminating weight values (e.g., through weight pruning and quantization), while comparatively little effort has been spent on devising techniques for encoding and compressing.In this paper we propose a novel lossy encoding method, Weightless, based on Bloomier filters, a probabilistic data structure BID5 .",
"Bloomier filters inexactly store a function map, and by adjusting the filter parameters, we can elect to use less storage space at the cost of an increasing chance of erroneous values.",
"We use this data structure to compactly encode the weights of a neural network, exploiting redundancy in the weights to tolerate some errors.",
"In conjunction with existing weight simplification techniques, namely pruning and clustering, our approach dramatically reduces the memory and bandwidth requirements of DNNs for over the wire transmission and ondevice storage.",
"Weightless demonstrates compression rates of up to 496× without loss of accuracy, improving on the state of the art by up to 1.51×.",
"Furthermore, we show that Weightless scales better with increasing sparsity, which means more sophisticated pruning methods yield even more benefits.This work demonstrates the efficacy of compressing DNNs with lossy encoding using probabilistic data structures.",
"Even after the same aggressive lossy simplification steps of weight pruning and clustering (see Section 2), there is still sufficient extraneous information left in model weights to allow an approximate encoding scheme to substantially reduce the memory footprint without loss of model accuracy.",
"Section 3 reviews Bloomier filters and details Weightless.",
"State-of-the-art compression results using Weightless are presented in Section 4.",
"Finally, in Section 4.3 shows that Weightless scales better as networks become more sparse compared to the previous best solution.",
"This paper demonstrates a novel lossy encoding scheme, called Weightless, for compressing sparse weights in deep neural networks.",
"The lossy property of Weightless stems from its use of the Bloomier filter, a probabilistic data structure for approximately encoding functions.",
"By first simplifying a model with weight pruning and clustering, we transform its weights to best align with the properties of the Bloomier filter to maximize compression.",
"Combined, Weightless achieves compression of up to 496×, improving the previous state-of-the-art by 1.51×.We",
"also see avenues for continuing this line of research. First",
", as better mechanisms for pruning model weights are discovered, end-to-end compression with Weightless will improve commensurately. Second",
", the theory community has already developed more advanced-albeit more complicatedconstruction algorithms for Bloomier filters, which promise asymptotically better space utilization compared to the method used in this paper. Finally",
", by demonstrating the opportunity for using lossy encoding schemes for model compression, we hope we have opened the door for more research on encoding algorithms and novel uses of probabilistic data structures."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0,
0.06666666269302368,
0.17142856121063232,
0.20000000298023224,
0.12903225421905518,
0.06896550953388214,
0.0714285671710968,
0.16326530277729034,
0.1818181723356247,
0.06666666269302368,
0.06451612710952759,
0.08695651590824127,
0,
0.1269841194152832,
0.0952380895614624,
0.3030303120613098,
0,
0.0624999962747097,
0.17391304671764374,
0.038461536169052124,
0,
0.08695651590824127,
0.11764705181121826,
0.19354838132858276,
0.1818181723356247,
0.10810810327529907,
0.13793103396892548,
0,
0,
0.04878048226237297,
0.19512194395065308
] | S1pWFzbAW | true | [
"We propose a new way to compress neural networks using probabilistic data structures."
] |
[
"Recent work has shown that contextualized word representations derived from neural machine translation (NMT) are a viable alternative to such from simple word predictions tasks.",
"This is because the internal understanding that needs to be built in order to be able to translate from one language to another is much more comprehensive.",
"Unfortunately, computational and memory limitations as of present prevent NMT models from using large word vocabularies, and thus alternatives such as subword units (BPE and morphological segmentations) and characters have been used.",
"Here we study the impact of using different kinds of units on the quality of the resulting representations when used to model syntax, semantics, and morphology. ",
"We found that while representations derived from subwords are slightly better for modeling syntax, character-based representations are superior for modeling morphology and are also more robust to noisy input.",
"Recent years have seen the rise of deep neural networks and the subsequent rise of representation learning based on network-internal activations.",
"Such representations have been shown useful when addressing various problems from fields such as image recognition , speech recognition BID2 , and natural language processing (NLP) BID30 .",
"The central idea is that the internal representations trained to solve an NLP task could be useful for other tasks as well.",
"For example, word embeddings learned for a simple word prediction task in context, word2vec-style BID31 , have now become almost obligatory in state-ofthe-art NLP models.",
"One issue with such word embeddings is that the resulting representation is context-independent.",
"Recently, it has been shown that huge performance gains can be achieved by contextualizing the representations, so that the same word could have a different embedding in different contexts.",
"This is best achieved by changing the auxiliary task, e.g., the ElMo model learns contextualized word embeddings from a language modeling task, using LSTMs BID37 .More",
"recently, it has been shown that complex tasks such as neural machine translation can yield superior representations BID29 . This",
"is because the internal understanding of the input language that needs to be built by the network in order to be able to translate from one language to another needs to be much more comprehensive compared to what would be needed for a simple word prediction task. Such",
"representations have yielded state-of-the-art results for tasks such as sentiment analysis, textual entailment, and question answering.Unfortunately, computational and memory limitations as of present prevent neural machine translation (NMT) models from using large-scale vocabularies, typically limiting them to 30-50k words . This",
"is a severe limitation, as most NLP applications need to handle vocabularies of millions of words, e.g., word2vec BID31 , GloVe BID36 and FastText BID32 offer pre-trained embeddings for 3M, 2M, and 2.5M words/phrases, respectively. The",
"problem is typically addressed using byte-pair encoding (BPE), where words are segmented into pseudo-word character sequences based on frequency BID43 . A somewhat",
"less popular solution is to use characters as the basic unit of representation BID8 . In the case",
"of morphologically complex languages, another alternative is to reduce the vocabulary by using unsupervised morpheme segmentation BID6 ).The impact of",
"using different units of representation in NMT models has been studied in previous work BID27 BID10 BID8 Lee et al., 2017, among others) , but the focus has been exclusively on the quality of the resulting translation output. However, it remains",
"unclear what input and output units should be chosen if we are primarily interested in representation learning. Here, we aim at bridging",
"this gap by evaluating the quality of NMT-derived embeddings originating from units of different granularity when used for modeling morphology, syntax, and semantics (as opposed to end tasks such as sentiment analysis and question answering). Our contributions can be",
"summarized as follows:• We study the impact of using words vs. characters vs. BPE units vs. morphological segments on the quality of representations learned by NMT models when used to model morphology, syntax, and semantics.• We further study the robustness",
"of these representations with respect to noise.• We make practical recommendations",
"based on our results.We found that while representations derived from morphological segments are better for modeling syntax, character-based ones are superior for morphology and are also more robust to noise.",
"Comparing Performance Across Tasks Character-based representations outperformed in the case of morphological tagging; BPE-based representations performed better than others in the semantic tagging task for German (and about the same in English); and Morfessor performed slightly better than others for syntax.",
"Syntactic tagging requires knowledge of the complete sentence.",
"Splitting a sentence into characters substantially increases the length (from 50 words in a sentence to 250 characters on average) of the sentence.",
"The character-based models lack in capturing long distance dependencies, which could be a reason for their low performance in this task.",
"Similarly, in case of morphological tagging, the information about the morphology of a word is dependent on the surrounding words plus internal information (root, morphemes etc.) presents in the word.",
"The character-based system has access to all of this information which results in high tagging performance.",
"Morfessor performed better than BPE in the morphological tagging task because its segments are linguistically motivated units (segmented into root + morphemes), making the information about the word morphology explicit in the representation.",
"In comparison, BPE solely focuses on the frequency of characters occurring together in the corpus and can yield linguistically incorrect units.",
"TAB3 summarizes the translation performance of each system.",
"In most of the cases, the subword-based systems perform better than the word-based and character-based systems.",
"However, this is not true in the case of using their representations as feature in the core NLP tasks.",
"For example, we found that character-based representations perform better than others in the morphological tagging task.",
"On an additional note, BPE-based representations although perform better for some tasks, are sensitive to noise.",
"Their ability to segment any unknown words into two known subwords result in less reliable systems.",
"Notably, the translation performance of the BPE-based system falls below the character-based system even with 10% noise only.",
"We studied the impact of using different representation units -words, characters, BPE units, and morphological segments on the representations learned by NMT.",
"Unlike previous work, which targeted end tasks such as sentiment analysis and question answering, here we focused on modeling morphology, syntax and semantics.",
"We found that",
"(i) while representations derived from subwords units are slightly better for modeling syntax,",
"(ii) character representations are distinctly better for modeling morphology, and",
"(iii) are also more robust to noise in contrast to subword representations,",
"(iv) and that using all representations together works best.",
"Based on our findings, we conjecture that although BPE segmentation is a de-facto standard in building state-of-the-art NMT systems, the underlying representations it yields are suboptimal for external tasks.",
"Character-based representations provide a more viable and robust alternative in this regard, followed by morphological segmentation.",
"In future work, we plan to explore specialized character-based architectures for NMT.",
"We further want to study how different units affect representation quality in non-recurrent models such as the Transformer BID48 and in convolutional architectures BID14 .A",
"SUPPLEMENTARY MATERIAL"
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08888888359069824,
0.09090908616781235,
0.23999999463558197,
0.8888888955116272,
0.260869562625885,
0.19999998807907104,
0.12765957415103912,
0.13636362552642822,
0,
0.11764705181121826,
0.0833333283662796,
0.1249999925494194,
0.04878048226237297,
0.10344827175140381,
0.16393442451953888,
0.10344827175140381,
0.09090908616781235,
0.15789473056793213,
0.2380952388048172,
0.28070175647735596,
0.09302324801683426,
0.33898305892944336,
0.5714285373687744,
0.23529411852359772,
0.2800000011920929,
0.15094339847564697,
0.13333332538604736,
0.19999998807907104,
0,
0.17391303181648254,
0.10526315122842789,
0.11764705181121826,
0.2380952388048172,
0.13333332538604736,
0.17142856121063232,
0.20512819290161133,
0.10526315122842789,
0.10526315122842789,
0.052631575614213943,
0.10810810327529907,
0.4651162624359131,
0.09090908616781235,
0.07999999821186066,
0.17142856121063232,
0.1249999925494194,
0.12121211737394333,
0.19354838132858276,
0.11764705181121826,
0.10526315122842789,
0.05882352590560913,
0.3478260934352875
] | B1x0E2C5tQ | true | [
"We study the impact of using different kinds of subword units on the quality of the resulting representations when used to model syntax, semantics, and morphology."
] |
[
"Few-shot learning is the process of learning novel classes using only a few examples and it remains a challenging task in machine learning.",
"Many sophisticated few-shot learning algorithms have been proposed based on the notion that networks can easily overfit to novel examples if they are simply fine-tuned using only a few examples.",
"In this study, we show that in the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as that of the state-of-the-art algorithm in the 5-shot task.",
"We then evaluate our method with more practical tasks, namely the high-resolution single-domain and cross-domain tasks.",
"With both tasks, we show that our method achieves higher accuracy than common few-shot learning algorithms.",
"We further analyze the experimental results and show that:",
"1) the retraining process can be stabilized by employing a low learning rate,",
"2) using adaptive gradient optimizers during fine-tuning can increase test accuracy, and",
"3) test accuracy can be improved by updating the entire network when a large domain-shift exists between base and novel classes.",
"Previous studies have shown that high image classification performance can be achieved by using deep networks and big datasets (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Szegedy et al., 2015) .",
"However, the performances of these algorithms rely heavily on extensive manually annotated images, and considerable cost is often incurred in preparing these datasets.",
"To avoid this problem, few-shot learning, which is a task of learning novel classes using only a few examples, has been actively researched.",
"However, few-shot learning remains a considerably challenging task in machine learning, and classification accuracy in few-shot tasks is much lower than that of the many-shot regime.",
"This is because a network pretrained using base classes must adapt to novel classes using only a few examples.",
"The simplest means of overcoming this difficulty is to fine-tune the network using novel classes.",
"However, the number of trainable parameters of deep networks is so large that we believe that networks can easily overfit to novel classes if we simply fine-tune the networks using only a few examples.",
"For example, the number of trainable parameters in the ResNet-152 (He et al., 2016 ) is approximately 60 M, which is much greater than the number of novel examples (e.g., 25 for 5-way 5-shot learning), and this leads us to the idea of overfitting.",
"Using various sophisticated methods, numerous studies have been conducted to prevent networks from overfitting.",
"However, the performance of a naive fine-tuning method has not been well investigated, and Chen et al. (2019) has pointed out that performance of this method had been underestimated in previous studies.",
"Therefore, in this study, we analyze the performance of a fine-tuning method and show that it can achieve higher classification accuracy than common few-shot learning methods and, in some cases, can achieve an accuracy approximating that of the state-of-the-art algorithm.",
"We also experimentally show that:",
"1) a low learning rate stabilizes the retraining process,",
"2) using an adaptive gradient optimizer when fine-tuning the network increases test accuracy, and",
"3) updating the entire network increases test accuracy when a large domain shift occurs between base and novel classes.",
"To evaluate accuracy in few-shot image classification tasks, the mini-ImageNet dataset (Vinyals et al., 2016) has been used in many previous studies.",
"This is a subset of the ImageNet dataset (Deng et al., 2009) in which each image is resized to 84 × 84 to reduce computational cost.the high-resolution mini-ImageNet dataset and cross-domain dataset.",
"Both datasets contain higherresolution images than the original mini-ImageNet dataset, and the cross-domain dataset represents a greater challenge because base and novel classes are sampled from different datasets.",
"Thus, a larger domain shift occurs between these classes.",
"In this study, we evaluate the performance of our method using the high-resolution mini-ImageNet dataset (high-resolution single-domain task) and cross-domain dataset (cross-domain task) as well as the common low-resolution mini-ImageNet dataset (low-resolution single-domain task).",
"Details of these datasets are provided in Section 2.3.",
"The main contributions of this study are as follows: 1) We show that in the common low-resolution single-domain task, our fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as that of the state-of-the-art method in the 5-shot task.",
"We also show that our method achieves higher accuracy than common few-shot learning methods both in the high-resolution single-domain and cross-domain tasks.",
"Note that we do not compare the performance of our method with the state-of-the-art algorithm in the high-resolution single-domain and cross-domain tasks because the performances for these tasks are not reported in the corresponding papers.",
"2) We further analyze the experimental results and show that a low learning rate stabilizes the relearning process, that test accuracy can be increased by using an adaptive gradient optimizer such as the Adam optimizer, and that updating the entire network can increase test accuracy when a large domain shift occurs.",
"2 OVERVIEW OF FEW-SHOT LEARNING 2.1 NOTATION Few-shot learning is a task of learning novel classes using only a few labeled examples.",
"This task is also called N -way K-shot learning, where N denotes the number of novel classes and K is the number of labeled examples per class.",
"We focus on the 5-way learning task such as in previous studies (Chen et al., 2019; Schwartz et al., 2018) .",
"Labeled and unlabeled examples of novel classes are called support and query sets, respectively.",
"A network is pretrained using base classes, which contain numerous labeled examples.",
"Base and novel classes are mutually exclusive.",
"Base classes are used for pretraining, and novel classes are used for retraining and testing.",
"Validation classes are used to determine a learning rate and the number of epochs required to retrain the network.",
"In this study, we showed that in the low-resolution single-domain task, our fine-tuning method achieved higher accuracy than common few-shot learning methods in the 1-shot task and nearly the same accuracy as the state-of-the-art method in the 5-shot task.",
"We also evaluated our method with more practical tasks, such as the high-resolution single-domain and cross-domain tasks.",
"In both tasks, our method achieved higher accuracy than common few-shot learning methods.",
"We then experimentally showed that:",
"1) a low learning rate stabilizes the retraining process,",
"2) adaptive gradient optimizers such as Adam improve test accuracy, and",
"3) updating the entire network results in higher accuracy when a large domain shift occurs.",
"We believe that these insights into fine-tuning for few-shot learning tasks will help our community tackle this challenging task."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1304347813129425,
0.2181818187236786,
0.23728813230991364,
0.1428571343421936,
0.1904761791229248,
0,
0.05128204822540283,
0.052631575614213943,
0.12765957415103912,
0.06896550953388214,
0.0833333283662796,
0.2083333283662796,
0.3199999928474426,
0.1428571343421936,
0.09756097197532654,
0.14814814925193787,
0.1230769157409668,
0.04999999701976776,
0.18867923319339752,
0.33898305892944336,
0,
0.05714285373687744,
0.04999999701976776,
0.13333332538604736,
0.1666666567325592,
0.14814814925193787,
0.07843136787414551,
0.05714285373687744,
0.039215680211782455,
0.0555555522441864,
0.25806450843811035,
0.2916666567325592,
0.18518517911434174,
0.08955223113298416,
0.08695651590824127,
0.0833333283662796,
0.08888888359069824,
0.05128204822540283,
0.052631575614213943,
0.060606058686971664,
0.0555555522441864,
0.09302324801683426,
0.2857142686843872,
0.1395348757505417,
0.20512820780277252,
0,
0.05714285373687744,
0,
0.1463414579629898,
0.17777776718139648
] | BJxpbREKvB | true | [
"An empirical study that provides a novel perspective on few-shot learning, in which a fine-tuning method shows comparable accuracy to more complex state-of-the-art methods in several classification tasks."
] |
[
"We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks.\n",
"We model the order stream as a stochastic process with finite history dependence, and employ a conditional Wasserstein GAN to capture history dependence of orders in a stock market. \n",
"We test our approach with actual market and synthetic data on a number of different statistics, and find the generated data to be close to real data.",
"Financial markets are among the most well-studied and closely watched complex multiagent systems in existence.",
"Well-functioning financial markets are critical to the operation of a complex global economy, and small changes in the efficiency or stability of such markets can have enormous ramifications.",
"Accurate modeling of financial markets can support improved design and regulation of these critical institutions.",
"There is a vast literature on financial market modeling, though still a large gap between the state-of-art and the ideal.",
"Analytic approaches provide insight through highly stylized model forms.",
"Agent-based models accommodate greater dynamic complexity, and are often able to reproduce \"stylized facts\" of real-world markets BID10 .",
"Currently lacking, however, is a simulation capable of producing market data at high fidelity and high realism.",
"Our aim is to develop such a model, to support a range of market design and analysis problems.",
"This work provides a first step, learning a high-fidelity generator from real stock market data streams.Our main contribution is an approach to produce stock market data that is close to real market data, using a Wasserstein generative adversarial network (WGAN) .",
"There are many challenges that we overcome as part of this contribution.",
"The first is how to represent a stream of stock market orders as data that can be used in a WGAN.",
"Towards this end, we assume the stock market data stream to arise from a stochastic process with finite (but long) memory dependence.",
"The stochastic process view also makes precise the conditional distribution that the generator is learning as well the joint distribution that the critic of the WGAN distinguishes by estimating the earth-mover distance.The second main challenge is the design of the network architecture.",
"We choose a conditional WGAN to capture the history dependence of the stochastic process, with both the generator and critic conditional on history of orders and the time of day.",
"A single LSTM layer is used to summarize the history succinctly.",
"The internal architecture of both the generator and critic uses a standard convolutional structure.",
"The generator outputs the next stock market order as well as how this order changes the active orders in the market.",
"Part of the generator output, which updates the active market orders, is produced using a pre-trained network to approximate the deterministic buy and sell order matching in the stock market.Finally, we experiment with synthetic and real market data.",
"The synthetic data is produced using a stock market simulator that has been used in several agent-based financial studies.",
"The real data was obtained from OneMarketData, a financial data provider and publisher of the OneTick database product.",
"We evaluate the generated data using various statistics such as the distribution of price and quantity of orders, inter-arrival times of orders, and the best bid and best ask evolution over time.",
"We find the generated data matches the corresponding statistics in real data (simulated or actual stock market) closely.",
"Our results reveal that GANs can be used to simulate a stock market.",
"While our results are promising, there are open issues that provide for further research material.",
"One experimental aspect is to try different size of the network in the WGAN, possibly dependent on the data size of the given stock and testing with many different variety of stocks.",
"Another open research issue is to output cancellations in a more intelligent manner than the heuristic approach we use now.",
"Overall, our work provides fertile ground for future research at the intersection of deep learning and finance."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9714285731315613,
0.22727271914482117,
0.3499999940395355,
0.0624999962747097,
0.0952380895614624,
0.06451612710952759,
0.17142856121063232,
0,
0.11428570747375488,
0.1818181723356247,
0.1818181723356247,
0.36734694242477417,
0,
0.21621620655059814,
0.20512819290161133,
0,
0.20512819290161133,
0.0714285671710968,
0.06451612710952759,
0.12121211737394333,
0.19999998807907104,
0.1666666567325592,
0.11764705181121826,
0.1463414579629898,
0.1818181723356247,
0.19999998807907104,
0,
0.2380952388048172,
0.10810810327529907,
0.05882352590560913
] | rke41hC5Km | true | [
"We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks."
] |
[
"We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under $\\ell_\\infty$ and $\\ell_2$ metrics.",
"It exploits a \\textit{sign-based}, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization.",
"It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction.",
"Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning or dimensionality reduction.",
"Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces.",
"On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks.",
"For a suite of published models, the algorithm is $3.8\\times$ less failure-prone while spending $2.5\\times$ fewer queries versus the best combination of state of art algorithms.",
"For example, it evades a standard MNIST model using just $12$ queries on average.",
"Similar performance is observed on a standard IMAGENET model with an average of $579$ queries.",
"Problem.",
"Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are malicious inputs designed to fool the model's prediction-see (Biggio and Roli, 2018) for a comprehensive, recent overview of adversarial examples.",
"Research on generating these malicious inputs started in the white-box setting, where access to the gradients of the models is assumed.",
"Since the gradient points to the direction of steepest ascent, an input can be perturbed along the gradient's direction to maximize the network's loss, thereby potentially causing misclassification under class prediction, e.g. with images, or evasion under detection, e.g. with malware.",
"The assumption of access to the underlying gradient does not however reflect real world scenarios.",
"Attack algorithms under a more realistic, restrictive black-box threat model, which assumes access to predictions in lieu of gradients, are therefore studied.",
"Central to their approaches is estimating the gradient.",
"To estimate the magnitudes and signs of the gradient, the community at large has formulated a continuous optimization problem of O(n) complexity where n is the input dimensionality.",
"Most recently work has sought to reduce this complexity by means of data-/time-dependent priors Ilyas et al. (2019) .",
"In this paper, we take a different tact and reduce the central problem to just estimating the signs of the gradients.",
"Our intuition arises from observing that estimating the sign of the top 30% gradient coordinates by magnitude is enough to achieve a rough misclassification rate of 70%.",
"Figure 1 reproducing Ilyas et al. (2019) illustrates this observation for the MNIST dataset-see Appendix A for other datasets.",
"Therefore our goal is to recover the sign of the gradient with high query efficiency so we can use it to generate adversarial examples as effective as those generated by full gradient estimation approaches.",
"Related Work.",
"We organize the related work in two themes, namely Adversarial Example Generation and Sign-Based Optimization.",
"The literature of the first theme primarily divides into white-box and black-box settings.",
"The white-box setting, while not the focus of this work, follows from the works of Biggio et al. (2013) and Goodfellow et al. (2015) who introduced the Fast Gradient Sign Method (FGSM), including several methods to produce adversarial examples for various learning tasks and threat perturbation constraints (Carlini and Wagner, 2017; Moosavi-Dezfooli et al., 2016; Hayes and Danezis, 2017; Al-Dujaili et al., 2018; Kurakin et al., 2017; Shamir et al., 2019) .",
"Turning to the blackbox setting and iterative optimization schemes, Narodytska and Kasiviswanathan (2017) , without using any gradient information, use a naive policy of perturbing random segments of an image to generate adversarial examples.",
"Bhagoji et al. (2017) reduce the dimensions of the feature space using Principal Component Analysis (PCA) and random feature grouping, before estimating gradients.",
"Chen et al. (2017) introduce a principled approach by using gradient based optimization.",
"They employ finite differences, a zeroth-order optimization means, to estimate the gradient and then use it to design a gradient-based attack.",
"While this approach successfully generates adversarial examples, it is expensive in how many times the model is queried.",
"Ilyas et al. (2018) substitute traditional finite differences methods with Natural Evolutionary Strategies (NES) to obtain an estimate of the gradient.",
"Tu et al. (2018) provide an adaptive random gradient estimation algorithm that balances query counts and distortion, and introduces a trained auto-encoder to achieve attack acceleration.",
"Ilyas et al. (2019) extend this line of work by proposing the idea of gradient priors and bandits: Bandits T D .",
"Our work contrasts with the general approach of these works in two ways:",
"a) We focus on estimating the sign of the gradient and investigate whether this estimation suffices to efficiently generate adversarial examples.",
"b) The above methods employ random sampling in constructing queries to the model while our construction is adaptive.",
"1 Another approach involves learning adversarial examples for one model (with access to its gradient information) to transfer them against another (Liu et al., 2016; Papernot et al., 2017) .",
"Alternately, Xiao et al. (2018) use a Generative Adversarial Network (GAN) to generate adversarial examples which are based on small norm-bounded perturbations.",
"These methods involve learning on a different model, which is expensive, and not amenable to comparison with setups-including ours-that directly query the model of interest.",
"Figure 1: Misclassification rate of an MNIST model on the noisy FGSM's adversarial examples as a function of correctly estimated coordinates of sign(∇ x f (x, y)) on 1000 random MNIST images.",
"Estimating the sign of the top 30% gradient coordinates (in terms of their magnitudes) is enough to achieve a rough misclassification rate of 70%.",
"More details can be found in Appendix A.",
"Sign-Based Optimization.",
"In the context of generalpurpose continuous optimization methods, signbased stochastic gradient descent was studied in both zeroth-and first-order setups.",
"In the latter, Bernstein et al. (2018) analyzed signSGD, a sign-based Stochastic Gradient Descent, and showed that it enjoys a faster empirical convergence than SGD in addition to the cost reduction of communicating gradients across multiple workers.",
"Liu et al. (2019) extended signSGD to zeroth-order setup with the ZO-SignSGD algorithm.",
"ZO-SignSGD (Liu et al., 2019) was shown to outperform NES against a blackbox model on MNIST.",
"These approaches use the sign of the gradient (or its zero-order estimate) to achieve better convergence, whereas our approach both estimates and uses the sign of the gradient.",
"Contributions.",
"We present the following contributions at the intersection of adversarial machine learning and black-box (zeroth-order) optimization:",
"1) We exploit the separability property of the directional derivative of the loss function of the model under attack in the direction of {±1} n vectors, to propose a divide-and-conquer, adaptive, memory-efficient algorithm, we name SignHunter, to estimate the gradient sign bits.",
"2) We provide a worst-case theoretical guarantee on the number of queries required by SignHunter to perform at least as well as FGSM (Goodfellow et al., 2015) , which has access to the model's gradient.",
"To our knowledge, no black-box attack from the literature offers a similar performance guarantee.",
"3) We evaluate our approach on a rigorous set of experiments on both, standard and adversarially hardened models.",
"All other previous works on this topic have published their results on a subset of the datasets and threat models we experimentally validate in this work.",
"Through these experiments, we demonstrate that SignHunter's adaptive search for the gradient sign allows it to craft adversarial examples within a mere fraction of the theoretical number of queries thus outperforming FGSM and state-of-the-art black-box attacks.",
"4) We release a software framework to systematically benchmark adversarial black-box attacks, including SignHunter's, on MNIST, CIFAR10, and IMAGENET models in terms of success rate, query count, and other metrics.",
"5) We demonstrate how SignHunter can be used to characterize adversarial cones in a black-box setup and in doing so, highlight the gradient masking effect.",
"Notation.",
"Let n denote the dimension of datapoint x.",
"Denote a hidden n-dimensional binary code by q * .",
"That is, q * ∈ H ≡ {−1, +1} n .",
"Further, denote the directional derivative of some function f at a point x in the direction of a vector v by D v f (x) ≡ v T ∇ x f (x) which often can be approximated by the finite difference method.",
"That is, for δ > 0, we have",
"Let Π S (·) be the projection operator onto the set S, B p (x, ) be the p ball of radius around x.",
"Assuming a black-box threat model, we studied the problem of generating adversarial examples for neural nets and proposed the gradient sign estimation problem as the core challenge in crafting (Tramèr et al., 2017a , Figure 2 ), for 500 correctly classified points x and ∈ {4, 10, 16}, we plot the probability that we find at least k orthogonal vectors r i -computed based on (Tramèr et al., 2017a , Lemma 7)-such that ||r i || ∞ = and x + r i is misclassified.",
"For both models and for the same points x, SAAS finds more orthogonal adversarial vectors r i than GAAS, thereby providing a better characterization of the space of adversarial examples in the vicinity of a point, albeit without a white-box access to the models.",
"these examples.",
"We formulate the problem as a binary black-box optimization one: maximizing the directional derivative in the direction of {±1} n vectors, approximated by the finite difference of the queries' loss values.",
"The separability property of the directional derivative helped us devise SignHunter, a query-efficient, tuning-free divide-and-conquer algorithm with a small memory footprint that is guaranteed to perform at least as well as FGSM after O(n) queries.",
"No similar guarantee is found in the literature.",
"In practice, SignHunter needs a mere fraction of this number of queries to craft adversarial examples.",
"The algorithm is one of its kind to construct adaptive queries instead of queries that are based on i.i.d. random vectors.",
"Robust to gradient masking, SignHunter can also be used to estimate the dimensionality of adversarial cones.",
"Moreover, SignHunter achieves the highest evasion rate on two public black-box attack challenges and breaks a model that argues robustness against substitute-model attacks."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.20512819290161133,
0.8108108043670654,
0.19999998807907104,
0.05405404791235924,
0.05882352590560913,
0.09999999403953552,
0.04651162400841713,
0.0624999962747097,
0.060606054961681366,
0.08695651590824127,
0.05405404791235924,
0.07692307233810425,
0.12121211737394333,
0.14999999105930328,
0.1538461446762085,
0.1428571343421936,
0.0555555522441864,
0.10810810327529907,
0.23255813121795654,
0,
0.1249999925494194,
0.060606054961681366,
0.06451612710952759,
0.0555555522441864,
0.16326530277729034,
0,
0.25806450843811035,
0.21621620655059814,
0.05714285373687744,
0.10256409645080566,
0.23255813121795654,
0.052631575614213943,
0.06451612710952759,
0.21052631735801697,
0.0555555522441864,
0.13333332538604736,
0.09999999403953552,
0.09302324801683426,
0.043478257954120636,
0.1538461446762085,
0,
0.1621621549129486,
0.15094339847564697,
0.06451612710952759,
0.11428570747375488,
0.14999999105930328,
0.1818181723356247,
0.15686273574829102,
0.1599999964237213,
0.1875,
0.17142856121063232,
0.0476190410554409,
0.19230768084526062,
0.1702127605676651,
0.2380952388048172,
0,
0.14814814925193787,
0,
0.04081632196903229,
0,
0,
0.1190476194024086,
0.11320754140615463,
0.22727271914482117,
0.11764705181121826,
0,
0.12121211737394333,
0.10526315122842789,
0.12121211737394333,
0.1463414579629898
] | SygW0TEFwH | true | [
"We present a sign-based, rather than magnitude-based, gradient estimation approach that shifts gradient estimation from continuous to binary black-box optimization."
] |
[
"Recurrent Neural Networks (RNNs) are widely used models for sequence data.",
"Just like for feedforward networks, it has become common to build \"deep\" RNNs, i.e., stack multiple recurrent layers to obtain higher-level abstractions of the data.",
"However, this works only for a handful of layers.",
"Unlike feedforward networks, stacking more than a few recurrent units (e.g., LSTM cells) usually hurts model performance, the reason being vanishing or exploding gradients during training.",
"We investigate the training of multi-layer RNNs and examine the magnitude of the gradients as they propagate through the network.",
"We show that, depending on the structure of the basic recurrent unit, the gradients are systematically attenuated or amplified, so that with an increasing depth they tend to vanish, respectively explode.",
"Based on our analysis we design a new type of gated cell that better preserves gradient magnitude, and therefore makes it possible to train deeper RNNs.",
"We experimentally validate our design with five different sequence modelling tasks on three different datasets.",
"The proposed stackable recurrent (STAR) cell allows for substantially deeper recurrent architectures, with improved performance.",
"Recurrent Neural Networks (RNN) have established themselves as a powerful tool for modelling sequential data.",
"They have significantly advanced a number of applications, notably language processing and speech recognition (Sutskever et al., 2014; Graves et al., 2013; Vinyals & Le, 2015) .",
"The basic building block of an RNN is a computational unit (or cell) that combines two inputs: the data of the current time step in the sequence and the unit's own output from the previous time step.",
"While RNNs are an effective approach that can in principle handle sequences of arbitrary and varying length, they are (in their basic form) challenged by long-term dependencies, since learning those would require the propagation of gradients over many time steps.",
"To alleviate this limitation, gated architectures have been proposed, most prominently Long Short-Term Memory (LSTM) cells (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRU, Chung et al., 2014) .",
"They use a gating mechanism to store and propagate information over longer time intervals, thus mitigating the vanishing gradient problem.",
"Although such networks can, in principle, capture long-term dependencies, it is known that more abstract and longer-term features are often represented better by deeper architectures (Bengio et al., 2009) .",
"To that end, multiple recurrent cells are stacked on top of each other in a feedforward manner, i.e., the output (or the hidden state) of the lower cell is connected to the input gate of the next-higher cell.",
"Many works have used such deep recurrent architectures, e.g., (Chung et al., 2015; Zilly et al., 2017) , and have shown their ability to extract more complex features from the input and make better predictions.",
"The need for multi-layer RNNs is particularly apparent for image-like input data, where multiple convolutional layers are required to extract a good representation, while the recurrence captures the evolution of each layer over time.",
"Since recurrent architectures are trained by propagating gradients across time, it is convenient to \"unwrap\" them into a lattice with two axes for depth (abstraction level) and time, see Fig. 1 .",
"This view makes it apparent that gradients flow in two directions, namely backwards in time and downwards from deeper to shallower layers.",
"In this paper we ask the question how the basic recurrent unit must be designed to ensure the \"vertical\" gradient flow across layers is stable and not impaired by vanishing or exploding gradients.",
"We show that stacking several layers of common RNN cells, by their construction, leads to instabilities (e.g., for deep LSTMs the gradients tend to vanish; for deep vanilla RNNs they tend to explode).",
"Our study makes three contributions:",
"(i) We analyse how the magnitude of the gradient changes as it propagates through a cell of the two-dimensional deep RNN lattice.",
"We show that, depending on the inner architecture of the employed RNN cell, gradients tend to be either amplified or attenuated.",
"As the depth increases, the repeated amplification (resp., attenuation) increases the risk of exploding (resp., vanishing) gradients.",
"(ii) We then leverage our analysis to design a new form of gated cell, termed the STAR (stackable recurrent) unit, which better preserves the gradient magnitude inside the RNN lattice.",
"It can therefore be stacked to much greater depth and still remains trainable.",
"(iii) Finally, we compare deep recurrent architectures built from different basic cells in an extensive set of experiments with three popular datasets.",
"The results confirm our analysis: training deep recurrent nets fail with most conventional units, whereas the proposed STAR unit allows for significantly deeper architectures.",
"In several cases, the ability to go deeper also leads to improved performance.",
"We have investigated the problem of vanishing/exploding gradient in deep RNNs.",
"In a first step, we analyse how the derivatives of the non-linear activation functions rescale the gradients as they propagate through the temporally unrolled network.",
"From both, the theoretical analysis, and associated numerical simulations, we find that standard RNN cells do not preserve the gradient magnitudes during backpropagation, and therefore, as the depth of the network grows, the risk that the gradients vanish or explode increases.",
"In a second step, we have proposed a new RNN cell, termed the STAckable Recurrent unit, which better preserves gradients through deep architectures and facilitates their training.",
"An extensive evaluation on three popular datasets confirms that STAR units can be stacked into deeper architectures than other RNN cells.",
"We see two main directions for future work.",
"On the one hand, it would be worthwhile to develop a more formal and thorough mathematical analysis of the gradient flow, and perhaps even derive rigorous bounds for specific cell types, that could, in turn, inform the network design.",
"On the other hand, it appears promising to investigate whether the analysis of the gradient flows could serve as a basis for better initialisation schemes to compensate the systematic influences of the cells structure, e.g., gating functions, in the training of deep RNNs.",
"C TRAINING DETAILS C.1",
"PIXEL-BY-PIXEL MNIST Following Tallec & Ollivier, chrono initialisation is applied for the bias term of k, b k .",
"The basic idea is that k should not be too large; such that the memory h can be retained over longer time intervals.",
"The same initialisation is used for the input and forget bias of the LSTM and the RHN and for the forget bias of LSTMw/f.",
"For the final prediction, a feedforward layer with softmax activation converts the hidden state to a class label.",
"The numbers of hidden units in the RNN layers are set to 128.",
"All networks are trained for 100 epochs with batch size 100, using the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.001, β 1 = 0.9 and β 2 = 0.999."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.045454539358615875,
0.07407406717538834,
0.08695651590824127,
0.29411762952804565,
0.08510638028383255,
0.3181818127632141,
0.1249999925494194,
0,
0.060606054961681366,
0.09302324801683426,
0.25,
0.178571417927742,
0.0416666604578495,
0.21052631735801697,
0.0833333283662796,
0.11764705181121826,
0.1599999964237213,
0.1599999964237213,
0.0833333283662796,
0.1538461446762085,
0.16326530277729034,
0.2083333283662796,
0,
0.3243243098258972,
0.15789473056793213,
0.060606054961681366,
0.30434781312942505,
0.06451612710952759,
0.19999998807907104,
0.1428571343421936,
0.06666666269302368,
0.41379308700561523,
0.14999999105930328,
0.23076923191547394,
0.3181818127632141,
0.051282044500112534,
0.07692307233810425,
0.18518517911434174,
0.2181818187236786,
0,
0.0555555522441864,
0.051282044500112534,
0.12121211737394333,
0.11764705181121826,
0.19354838132858276,
0.0833333283662796
] | SkgNZeSKPB | true | [
"We analyze the gradient propagation in deep RNNs and from our analysis, we propose a new multi-layer deep RNN."
] |
[
"Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive.",
"In this paper, we attack this problem from a modelling approach, where we perform thorough theoretical analysis on BN applied to simplified model: ordinary least squares (OLS).",
"We discover that gradient descent on OLS with BN has interesting properties, including a scaling law, convergence for arbitrary learning rates for the weights, asymptotic acceleration effects, as well as insensitivity to choice of learning rates.",
"We then demonstrate numerically that these findings are not specific to the OLS problem and hold qualitatively for more complex supervised learning problems.",
"This points to a new direction towards uncovering the mathematical principles that underlies batch normalization.",
"Batch normalization BID7 (BN) is one of the most important techniques for training deep neural networks and has proven extremely effective in avoiding gradient blowups during back-propagation and speeding up convergence.",
"In its original introduction BID7 , the desirable effects of BN are attributed to the so-called \"reduction of covariate shift\".",
"However, it is unclear what this statement means in precise mathematical terms.",
"To date, there lacks a comprehensive theoretical analysis of the effect of batch normalization.In this paper, we study the convergence and stability of gradient descent with batch normalization (BNGD) via a modeling approach.",
"More concretely, we consider a simplified supervised learning problem: ordinary least squares regression, and analyze precisely the effect of BNGD when applied to this problem.",
"Much akin to the mathematical modeling of physical processes, the least-squares problem serves as an idealized \"model\" of the effect of BN for general supervised learning tasks.",
"A key reason for this choice is that the dynamics of GD without BN (hereafter called GD for simplicity) in least-squares regression is completely understood, thus allowing us to isolate and contrast the additional effects of batch normalization.The modeling approach proceeds in the following steps.",
"First, we derive precise mathematical results on the convergence and stability of BNGD applied to the least-squares problem.",
"In particular, we show that BNGD converges for any constant learning rate ε ∈ (0, 1], regardless of the conditioning of the regression problem.",
"This is in stark contrast with GD, where the condition number of the problem adversely affect stability and convergence.",
"Many insights can be distilled from the analysis of the OLS model.",
"For instance, we may attribute the stability of BNGD to an interesting scaling law governing ε and the initial condition; This scaling law is not present in GD.",
"The preceding analysis also implies that if we are allowed to use different learning rates for the BN rescaling variables (ε a ) and the remaining trainable variables (ε), we may conclude that BNGD on our model converges for any ε > 0 as long as ε a ∈ (0, 1].",
"Furthermore, we discover an acceleration effect of BNGD and moreover, there exist regions of ε such that the performance of BNGD is insensitive to changes in ε, which help to explain the robustness of BNGD to the choice of learning rates.",
"We reiterate that contrary to many previous works, all the preceding statements are precise mathematical results that we derive for our simplified model.",
"The last step in our modeling approach is also the most important: we need to demonstrate that these insights are not specific features of our idealized model.",
"Indeed, they should be true characteristics, at least in an approximate sense, of BNGD for general supervised learning problems.",
"We do this by numerically investigating the convergence, stability and scaling behaviors of BNGD on various datasets and model architectures.",
"We find that the key insights derived from our idealized analysis indeed correspond to practical scenarios.",
"In this paper, we adopted a modeling approach to investigate the dynamical properties of batch normalization.",
"The OLS problem is chosen as a point of reference, because of its simplicity and the availability of convergence results for gradient descent.",
"Even in such a simple setting, we saw that BNGD exhibits interesting non-trivial behavior, including scaling laws, robust convergence properties, acceleration, as well as the insensitivity of performance to the choice of learning rates.",
"Although these results are derived only for the OLS model, we show via experiments that these are qualitatively valid for general scenarios encountered in deep learning, and points to a concrete way in uncovering the reasons behind the effectiveness of batch normalization.Interesting future directions include the extension of the results for the OLS model to more general settings of BNGD, where we believe the scaling law (Proposition 3.2) should play a significant role.",
"In addition, we have not touched upon another empirically observed advantage of batch normalization, which is better generalization errors.",
"It will be interesting to see how far the current approach takes us in investigating such probabilistic aspects of BNGD."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.24390242993831635,
0.1249999925494194,
0.290909081697464,
0.30434781312942505,
0.3684210479259491,
0.15094339847564697,
0.1463414579629898,
0,
0.2745097875595093,
0.375,
0.30434781312942505,
0.25806450843811035,
0.25,
0.17777776718139648,
0.1463414579629898,
0.23529411852359772,
0.1666666567325592,
0.24242423474788666,
0.25925925374031067,
0.2222222238779068,
0.2448979616165161,
0.1904761791229248,
0.2857142686843872,
0.307692289352417,
0.307692289352417,
0.1818181723356247,
0.25925925374031067,
0.2531645596027374,
0.0952380895614624,
0.1395348757505417
] | SJg7IsC5KQ | true | [
"We mathematically analyze the effect of batch normalization on a simple model and obtain key new insights that applies to general supervised learning."
] |
[
"Solving tasks in Reinforcement Learning is no easy feat.",
"As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal resulting in unwanted behavior.",
"While constraints may solve this issue, there is no closed form solution for general constraints.",
"In this work we present a novel multi-timescale approach for constrained policy optimization, called `Reward Constrained Policy Optimization' (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint satisfying one.",
"We prove the convergence of our approach and provide empirical evidence of its ability to train constraint satisfying policies.",
"Applying Reinforcement Learning (RL) is generally a hard problem.",
"At each state, the agent performs an action which produces a reward.",
"The goal is to maximize the accumulated reward, hence the reward signal implicitly defines the behavior of the agent.",
"While in computer games (e.g. BID2 ) there exists a pre-defined reward signal, it is not such in many real applications.An example is the Mujoco domain BID33 , in which the goal is to learn to control robotic agents in tasks such as: standing up, walking, navigation and more.",
"Considering the Humanoid domain, the agent is a 3 dimensional humanoid and the task is to walk forward as far as possible (without falling down) within a fixed amount of time.",
"Naturally, a reward is provided based on the forward velocity in order to encourage a larger distance; however, additional reward signals are provided in order to guide the agent, for instance a bonus for staying alive, a penalty for energy usage and a penalty based on the force of impact between the feet and the floor (which should encourage less erratic behavior).",
"Each signal is multiplied by it's own coefficient, which controls the emphasis placed on it.This approach is a multi-objective problem BID20 ; in which for each set of penalty coefficients, there exists a different, optimal solution, also known as Pareto optimality BID34 .",
"In practice, the exact coefficient is selected through a time consuming and a computationally intensive process of hyper-parameter tuning.",
"As our experiments show, the coefficient is not shared across domains, a coefficient which leads to a satisfying behavior on one domain may lead to catastrophic failure on the other (issues also seen in BID17 and BID19 ).",
"Constraints are a natural and consistent approach, an approach which ensures a satisfying behavior without the need for manually selecting the penalty coefficients.In constrained optimization, the task is to maximize a target function f (x) while satisfying an inequality constraint g(x) ≤ α.",
"While constraints are a promising solution to ensuring a satisfying behavior, existing methods are limited in the type of constraints they are able to handle and the algorithms that they may support -they require a parametrization of the policy (policy gradient methods) and propagation of the constraint violation signal over the entire trajectory (e.g. BID26 ).",
"This poses an issue, as Q-learning algorithms such as DQN BID21 do not learn a parametrization of the policy, and common Actor-Critic methods (e.g. BID27 BID22 BID0 Reward shaping BID29 3 BID29 ) build the reward-to-go based on an N-step sample and a bootstrap update from the critic.In this paper, we propose the 'Reward Constrained Policy Optimization' (RCPO) algorithm.",
"RCPO incorporates the constraint as a penalty signal into the reward function.",
"This penalty signal guides the policy towards a constraint satisfying solution.",
"We prove that RCPO converges almost surely, under mild assumptions, to a constraint satisfying solution (Theorem 2).",
"In addition; we show, empirically on a toy domain and six robotics domains, that RCPO results in a constraint satisfying solution while demonstrating faster convergence and improved stability (compared to the standard constraint optimization methods).Related",
"work: Constrained Markov Decision Processes BID1 are an active field of research. CMDP applications",
"cover a vast number of topics, such as: electric grids BID14 , networking BID11 , robotics BID8 BID10 BID0 BID9 and finance BID15 BID32 .The main approaches",
"to solving such problems are (i) Lagrange multipliers",
"BID5 BID4 , (ii) Trust Region BID0 ,",
"(iii) integrating prior",
"knowledge BID9 and (iv) manual selection of",
"the penalty coefficient BID31 BID18 BID25 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.22641508281230927,
0.13636362552642822,
0.22580644488334656,
0.1666666567325592,
0.10256409645080566,
0.1428571343421936,
0.17391303181648254,
0.21917808055877686,
0.178571417927742,
0.1666666567325592,
0.2571428418159485,
0.1249999925494194,
0.2222222238779068,
0.1764705777168274,
0.21917808055877686,
0.12048192322254181,
0.24390242993831635,
0.24390242993831635,
0.3404255211353302,
0.2222222238779068,
0,
0.0357142798602581,
0.052631575614213943,
0,
0,
0,
0.1111111119389534
] | SkfrvsA9FX | true | [
"For complex constraints in which it is not easy to estimate the gradient, we use the discounted penalty as a guiding signal. We prove that under certain assumptions it converges to a feasible solution."
] |
[
"The Handheld Virtual Panel (HVP) is the virtual panel attached to the non-dominant hand’s controller in virtual reality (VR).",
"The HVP is the go-to technique for enabling menus and toolboxes in VR devices.",
"In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach.",
"Our results show that all four factors have significant effects on user performance.",
"Based on the results, we propose guidelines towards the ergonomic and performant design of the HVP interfaces.",
"With the increasing popularity of consumer virtual reality (VR), we see more and more VR apps for creativity and productivity.",
"These apps fundamentally require menus and toolboxes for the assortment of options and controls they offer.",
"And the interaction artifact that is quickly becoming the go-to technique for this is the handheld virtual panel (HVP) .",
"The HVP provides the primary toolbox in Google's TiltBrush [15] (Figure 1 (left)) and Blocks [14] , Oculus's Quill [11] and Medium [10] (Figure 1 (right)), and HTC Vive's MakeVR [18] .",
"Szalvari et al. in 1997 [30, 31] proposed the personal interaction panel where the user hold a tracked tablet in the second hand while doing their primary interaction with the dominant hand using a stylus.",
"HVPs extend that concept for virtual panels anchored to the controller in the non-dominant hand and using ray-tracing instead of a stylus.",
"There are multiple advantages to such an interaction [20] .",
"First, handheld windows move along with the user, so they are always within reach.",
"Second, they do not overly clutter the user's view, unless explicitly moved by the user.",
"Third, handheld windows take advantage of the proprioceptive sense because they are attached to the non-dominant hand.",
"However, even with the ubiquity of HVP in products and research literature, we do not have a sense of what factors govern performance of target selection in HVPs.",
"Consequently, there is Unpublished working draft.",
"Not for distribution.",
"Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.",
"Copyrights for components of this work owned by others than ACM must be honored.",
"Abstracting with credit is permitted.",
"To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.",
"Request permissions from [email protected].",
"a need to understand and quantify HVP target selection performance while considering these two factors:",
"1) hand motion here is governed by the direction of motion in relation to the ground due to the effects of gravity, and (2) since both the target and the pointer can be moved and controlled by the user during acquisition, user's approach will vary depending on the angle of movement in addition to distance and width.",
"We conduct a study to measure HVP target acquisition performance in relation to four factors that relate to the direction of movement with respect to gravity, the angle of movement with respect to the body, distance, and width.",
"The results show that the performance depends significantly on all four factors.",
"Based on the results, we propose guidelines towards the ergonomic design of the HVP interfaces.",
"The results suggest that gravity played a major part even when our experiment design minimized fatigue between conditions.",
"The effect would be much more pronounced with longer, fatigue-inducing tasks.",
"Most current HVPs use a cube-style panel with equal vertical and horizontal sizes.",
"One simple solution to minimize the effect of gravity would be to have HVPs that have larger horizontal widths than vertical.",
"Our distance-based results suggest that minimizing hand motion and instead relying on wrist flicks to move the raycast pointer could help performance (see [26, 27] ).",
"Therefore, as opposed to having smaller panels, panel sizes can be increased (at least horizontally) to encourage the use of coarse wrist flicking.",
"Further, the design needs to minimize motion when the user is performing tasks below the panel (for instance, creating a ground texture) and will need to go against gravity to reach the HVP.",
"One solution here would be arrange targets on the panel such that the high frequency targets are placed at the bottom of the panel, thus making them easier to reach from the bottom, while not overtly affecting the performance from top.",
"Another possibility is to retarget the HVP [2] at a lower position while the non-dominant hand remains at the same position so that the user has to move less against gravity to reach the HVP.",
"Retargeting has not been explored in the context of HVPs and could be a really useful technique to counter such effects.",
"However, the tradeoff of increasing the visuohaptic disconnect in this case would need to be explored.",
"Overall, we suggest three takeaways that should be considered by designers for HVPs depending on the context:",
"1) Panels with large horizontal widths as opposed to square shaped ones should be considered to counter effects of gravity and encourage wrist flicking,",
"2) Place high-frequency targets at the bottom of the panel, and",
"3) investigate retargeting of the HVP given the same non-dominant hand positions to minimize user motion against gravity.",
"The handheld virtual panel is the most popular technique for accessing tools or menus in commercial VR creativity and productivity applications.",
"In this paper, we conduct an evaluation of the target acquisition performance in the HVP as a measure of four variables.",
"Our results show that all four have an effect on user performance.",
"While there are expected effects such as reducing acquisition time with increasing width, the evaluation also suggests that gravity may be a crucial issue even when fatigue is minimized.",
"Based on the results, we list takeaways to help improve the design of HVPs and indicate paths for future explorations.",
"We believe addressing the limitations of HVPs uncovered in our study will go a long way in improving the user experience of HVP-based VR applications."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17391303181648254,
0.23255813121795654,
0.5357142686843872,
0.1904761791229248,
0.09090908616781235,
0.21276594698429108,
0.13636362552642822,
0.17777776718139648,
0.1071428507566452,
0.10526315122842789,
0.3199999928474426,
0.052631575614213943,
0.09302324801683426,
0.04651162400841713,
0.13333332538604736,
0.2222222238779068,
0,
0.0625,
0.17910447716712952,
0.09302324801683426,
0.05882352590560913,
0.04255318641662598,
0,
0.1818181723356247,
0.2857142686843872,
0.4912280738353729,
0.19512194395065308,
0.0476190447807312,
0.08510638028383255,
0.09999999403953552,
0.0952380895614624,
0.1249999925494194,
0.145454540848732,
0.07843136787414551,
0.10526315122842789,
0.1269841194152832,
0.1090909019112587,
0.1599999964237213,
0.13636362552642822,
0.08695651590824127,
0.1538461446762085,
0.10256409645080566,
0.1304347813129425,
0.2800000011920929,
0.2083333283662796,
0.19512194395065308,
0.13793103396892548,
0.1666666567325592,
0.15686273574829102
] | YEhL2zUxfO | true | [
"The paper investigates target acquisition for handheld virtual panels in VR and shows that target width, distance, direction of approach with respect to gravity, and angle of approach, all impact user performance."
] |
[
"Deep neural networks have demonstrated unprecedented success in various knowledge management applications.",
"However, the networks created are often very complex, with large numbers of trainable edges which require extensive computational resources.",
"We note that many successful networks nevertheless often contain large numbers of redundant edges.",
"Moreover, many of these edges may have negligible contributions towards the overall network performance.",
"In this paper, we propose a novel iSparse framework and experimentally show, that we can sparsify the network, by 30-50%, without impacting the network performance.",
"iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output.",
"Furthermore, iSparse can be applied both while training a model or on top of a pre-trained model, making it a retraining-free approach - leading to a minimal computational overhead.",
"Comparisons of iSparse against PFEC, NISP, DropConnect, and Retraining-Free on benchmark datasets show that iSparse leads to effective network sparsifications.",
"Deep neural networks (DNNs), particularly convolutional neural networks (CNN), have shown impressive success in many applications, such as facial recognition (Lawrence et al., 1997) , time series analysis (Yang et al., 2015) , speech recognition (Hinton et al., 2012) , object classification (Liang & Hu, 2015) , and video surveillance (Karpathy & et. at., 2014) .",
"As the term \"deep\" neural networks implies, this success often relies on large networks, with large number of trainable edges (weights) (Huang et al., 2017; Zoph et al., 2018; He et al., 2016; Simonyan & Zisserman, 2015) .",
"While a large number of trainable edges help generalize the network for complex and diverse patterns in large-scale datasets, this often comes with enormous computation cost to account for the non-linearity of the deep networks (ReLU, sigmoid, tanh) .",
"In fact, DNNs owe their recent success to hardware level innovations that render the immense computational requirements practical (Ovtcharov & et. al., 2015; Matthieu Courbariaux et al., 2015) .",
"However, the benefits of hardware solutions and optimizations that can be applied to a general purpose DNN or CNN are limited and these solutions are fast reaching their limits.",
"This has lead to significant interest in networkspecific optimization techniques, such as network compression (Choi & et. al., 2018) , pruning (Li et al., 2016; Yu et al., 2018) , and regularization (Srivastava & et. al., 2014; Wan et al., 2013) , aim to reduce the number of edges in the network.",
"However, many of these techniques require retraining the pruned network, leading to the significant amount of computational waste.",
"In this paper, we proposed iSparse, a novel output-informed, framework for edge sparsification in deep neural networks (DNNs).",
"In particular, we propose a novel edge significance score that quantifies the significance of each edge in the network relative to its contribution to the final network output.",
"iSparse leverages this edge significance score to minimize the redundancy in the network by sparsifying those edges that contribute least to the final network output.",
"Experiments, with 11 benchmark datasets and using two well-know network architectures have shown that the proposed iSparse framework enables 30 − 50% network sparsification with minimal impact on the model classification accuracy.",
"Experiments have also shown that the iSparse is highly robust to variations in network elements (activation and model optimization functions) and that iSparse provides a much better accuracy/classification-time trade-off against competitors."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.1428571343421936,
0.05405404791235924,
0.21621620655059814,
0.21739129722118378,
0.380952388048172,
0.16326530277729034,
0.1428571343421936,
0,
0.1428571343421936,
0.14035087823867798,
0.03999999538064003,
0.08163265138864517,
0.10169491171836853,
0.05128204822540283,
0.04878048226237297,
0.2222222238779068,
0.3636363446712494,
0.26923075318336487,
0.11764705181121826
] | ryefmpEYPr | true | [
"iSparse eliminates irrelevant or insignificant network edges with minimal impact on network performance by determining edge importance w.r.t. the final network output. "
] |
[
"Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.",
"Motivated by the observation that efforts to code\n",
"world knowledge into machine readable knowledge bases tend to be entity-centric,\n",
"we investigate the use of a fill-in-the-blank task to learn context independent representations of entities from the contexts in which those entities were mentioned.\n",
"We show that large scale training of neural models allows us to learn extremely high fidelity entity typing information, which we demonstrate with few-shot reconstruction of Wikipedia categories.",
"Our learning approach is powerful enough\n",
"to encode specialized topics such as Giro d’Italia cyclists.",
"A long term goal of artificial intelligence has been the development and population of an entitycentric representation of human knowledge.",
"Efforts have been made to create the knowledge representation with knowledge engineers BID10 or crowdsourcers BID1 .",
"However, these methods have relied heavily on human definitions of their ontologies, which are both limited in scope and brittle in nature.",
"Conversely, due to recent advances in deep learning, we can now learn robust general purpose representations of words BID13 and contextualized phrases BID16 BID6 directly from large textual corpora.Consider the following context in which an entity mention is replaced with the [MASK] symbol:.",
". . [MASK] , a Russian factory worker, was the first woman in space . . .As readers, we understand that first woman in space is a unique identifier, and we are able to fill in the blank unambiguously. The central hypothesis of this paper is that, by matching entities to the contexts in which they are mentioned, we should be able to build a representation for Valentina Tereshkova that encodes the fact that she was the first woman in space.To do this, we start with BERT BID6 , a powerful pretrained text encoder, to encode contexts-Wikipedia text in which a hyperlinked span has been blanked out-and we train an entity encoder to match the BERT representation of the entity's contexts. We experiment with a lookup table that maps each entity to a fixed length vector, which we call RELIC (Representations of Entities Learned In Context). We hypothesize that the dedicated entity representations in RELIC should be able to capture knowledge that is not present in BERT. To test this, we compare RELIC to two BERT-based entity encoders: one that encodes the entity's canonical name, and one that encodes the first paragraph of the entity's Wikipedia page.Ultimately, we would like our representations to encode all of the salient information about each entity. However, for this initial work, we study our representations' ability to capture Wikipedia categorical information encoded by human experts. We show that given just a few exemplar entities of a Wikipedia category such as Giro d'Italia cyclists, we can use RELIC to recover the remaining entities of that category with good precision.",
"We demonstrated that the RELIC fill-in-the-blank task allows us to learn highly interesting representations of entities with their own latent ontology, which we empirically verify through a few-shot Wikipedia category reconstruction task.",
"We encourage researchers to explore the properties of our entity representations and BERT context encoder, which we will release publicly."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0.09090908616781235,
0,
0.1666666567325592,
0.3414634168148041,
0,
0,
0,
0.06896550953388214,
0,
0.178571417927742,
0.125,
0.31111109256744385,
0.1764705777168274
] | BJgum4Qgu4 | true | [
"We learn entity representations that can reconstruct Wikipedia categories with just a few exemplars."
] |
[
"Variational autoencoders (VAEs) have been successful at learning a low-dimensional manifold from high-dimensional data with complex dependencies.",
"At their core, they consist of a powerful Bayesian probabilistic inference model, to capture the salient features of the data.",
"In training, they exploit the power of variational inference, by optimizing a lower bound on the model evidence.",
"The latent representation and the performance of VAEs are heavily influenced by the type of bound used as a cost function.",
"Significant research work has been carried out into the development of tighter bounds than the original ELBO, to more accurately approximate the true log-likelihood.",
"By leveraging the q-deformed logarithm in the traditional lower bounds, ELBO and IWAE, and the upper bound CUBO, we bring contributions to this direction of research.",
"In this proof-of-concept study, we explore different ways of creating these q-deformed bounds that are tighter than the classical ones and we show improvements in the performance of such VAEs on the binarized MNIST dataset.\n",
"Variational autoencoders (VAEs) BID10 , BID4 ) are powerful Bayesian probabilistic models, which combine the advantages of neural networks with those of Bayesian inference.",
"They consist of an encoder created with a neural network architecture, which maps the high-dimensional input data, x, to a low-dimensional latent representation, z, through the posterior probability distribution, p(z|x).",
"Then, samples from this latent distribution are decoded back to a high-dimensional signal, through another neural network architecture and the probability distribution p(x|z).",
"Integration performed with these probability distributions from the Bayesian framework of VAEs is intractable.",
"As a solution, variational inference is employed to perform learning in these models, whereby a tractable bound on the model evidence is optimized instead of the intractable model evidence itself BID3 .",
"By design, the output model is set as p(x|z), usually a Bernoulli or a Gaussian probability distribution, depending on whether the target is discrete or continuous, and the prior distribution of the latent space as p(z).",
"However, the true posterior distribution, p(z|x), remains unknown and is intractable.",
"To solve this issue, an approximate posterior distribution, q(z|x), is learnt by means of a lower bound on the model evidence, termed the ELBO.",
"For one data point, x (i) , writing out the Kullback-Leibler divergence between the true and approximate posterior distributions and using its positivity property yields this bound: DISPLAYFORM0 The lower bound on the model evidence, the ELBO, now becomes the cost function used during the training phase of the VAEs.",
"Over time, the first term shows how the reconstruction loss changes and the second term how far the approximate posterior is to the prior distribution.",
"The result of inference and the performance of VAEs on reconstructing and generating images heavily depend on the type of bound employed in training.",
"A significant body of work has been carried out to replace the ELBO with tighter bounds on the model evidence.",
"On the one hand, starting from an unbiased estimator of the true log-likelihood, the authors of BID0 derive an importance sampling estimate of the model evidence, the IWAE.",
"This represents one of the tightest bounds of VAEs and has only recently been improved on in BID8 , BID11 .",
"Increasing the number of importance samples in the IWAE objective, decreases the signal-to-noise-ratio of the gradients, which makes the learning more difficult, as the gradients suffer from a larger level of noise BID8 .",
"Several strategies are able to correct this issue.",
"In the first algorithm, MIWAE, the outer expectation of the IWAE objective is approximated with more than one sample, as is the case in the IWAE.",
"The second algorithm, CIWAE, represents a convex combination of the ELBO and the IWAE bounds and the third algorithm, PIWAE, separately trains the encoder and the decoder networks with different IWAE objectives.On the other hand, leveraging different divergences between the true and the approximate posterior distributions has lead to diverse bounds on the model evidence.",
"Starting from the Rényi α-divergence BID9 between such distributions, a family of lower and upper bounds are obtained, parameterized by α BID6 .",
"However, these lower bounds become competitive with the IWAE, only in the limit α → −∞.",
"In addition, the upper bounds suffer from approximation errors and bias and the means to select the best value of the hyperparameter α is unknown.",
"Through an importance sampling scheme similar to the one found in the IWAE, these Rényi α bounds are tightened in BID15 .",
"If the Rényi α-divergence is replaced with the χ 2 divergence, the bound on the model evidence becomes the upper bound CUBO BID1 .",
"The Rényi α-family of bounds and others lose their interpretability as a reconstruction loss and a Kullback-Leibler divergence term that measures how close the approximate posterior is to the prior distribution.",
"They remain just a cost function optimized during training.With different compositions of convex and concave functions, the approaches described above are unified in the K-sample generalized evidence lower bound, GLBO BID11 .",
"This study generalizes the concept of maximizing the logarithm of the model evidence to maximizing the φ-evidence score, where φ(u) is a concave function that replaces the logarithm.",
"It allows for great flexibility in the choice of training objectives in VAEs.",
"One particular setting provides a lower bound, the CLBO, which surpasses the IWAE objective.",
"We addressed the challenging task of deriving tighter bounds on the model evidence of VAEs.",
"Significant research effort has gone in this direction, with several major contributions having been developed so far, which we reviewed in the introduction.",
"We leveraged the q-deformed logarithm function, to explore other ways of tightening the lower bounds.",
"As well as improvements in the estimated true log-likelihood, we found that the q-deformed bounds are much closer to the estimated true log-likelihood, than the classical bounds are.",
"Thus, training with our novel bounds as the cost function may increase the learning ability of VAEs.",
"Through the preliminary experiments we have conducted so far, we have achieved our goal.",
"They show that our approach has merit and that this direction of research is worth pursuing in more depth, to produce more accurate bounds and to study their impact on the performance of VAEs.As future work, similarly to BID8 , we plan to investigate how the tightening the ELBO and the IWAE influences the learning process and affects the gradients and the structure of the latent space, compared with the classical case.",
"In addition, we plan to explore different optimization strategies for q and to study its role in achieving tighter bounds.",
"We will also apply our q-deformed bounds, to investigate the disentanglement problem in VAEs, see for example BID2 .",
"The research question addressed here is how different bounds change the structure of the latent space, to provide better or worse disentanglement scores.",
"Finally, we would also like to test our novel bounds on all the major benchmark datasets used for assessing the performance of VAEs and compare them with other state-of-the-art bounds on the model evidence."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.06451612710952759,
0.1249999925494194,
0.12903225421905518,
0.060606054961681366,
0.277777761220932,
0.2702702581882477,
0.260869562625885,
0.1111111044883728,
0.0952380895614624,
0.1111111044883728,
0.0714285671710968,
0.14999999105930328,
0.04651162400841713,
0.07999999821186066,
0.05405404791235924,
0.035087715834379196,
0.12121211737394333,
0.060606054961681366,
0.24242423474788666,
0.11428570747375488,
0.12121211737394333,
0.04999999701976776,
0.09090908616781235,
0.11764705181121826,
0.1090909019112587,
0.1111111044883728,
0.20689654350280762,
0.17142856121063232,
0.24242423474788666,
0.0624999962747097,
0.1428571343421936,
0.04444444179534912,
0.11428570747375488,
0.07692307233810425,
0.07407406717538834,
0.2222222238779068,
0.1111111044883728,
0.2857142686843872,
0.3529411852359772,
0.13333332538604736,
0.1538461446762085,
0.11764705926179886,
0.24242423474788666,
0.1875,
0.1666666567325592,
0.1818181723356247
] | rygJV8UKuV | true | [
"Using the q-deformed logarithm, we derive tighter bounds than IWAE, to train variational autoencoders."
] |
[
"A belief persists long in machine learning that enlargement of margins over training data accounts for the resistance of models to overfitting by increasing the robustness.",
"Yet Breiman shows a dilemma (Breiman, 1999) that a uniform improvement on margin distribution \\emph{does not} necessarily reduces generalization error.",
"In this paper, we revisit Breiman's dilemma in deep neural networks with recently proposed normalized margins using Lipschitz constant bound by spectral norm products.",
"With both simplified theory and extensive experiments, Breiman's dilemma is shown to rely on dynamics of normalized margin distributions, that reflects the trade-off between model expression power and data complexity.",
"When the complexity of data is comparable to the model expression power in the sense that training and test data share similar phase transitions in normalized margin dynamics, two efficient ways are derived via classic margin-based generalization bounds to successfully predict the trend of generalization error.",
"On the other hand, over-expressed models that exhibit uniform improvements on training normalized margins may lose such a prediction power and fail to prevent the overfitting. \n",
"Margin, as a measurement of the robustness allowing some perturbations on classifier without changing its decision on training data, has a long history in characterizing the performance of classification algorithms in machine learning.",
"As early as BID17 , it played a central role in the proof on finite-stopping or convergence of perceptron algorithm when training data is separable.",
"Equipped with convex optimization technique, a plethora of large margin classifiers are triggered by support vector machines BID3 BID23 .",
"AdaBoost, an iterative algorithm to combine an ensemble of classifiers proposed by BID4 , often exhibits a resistance to overfitting phenomenon that during the training process the generalization error keeps on non-increasing when the training error drops to zero.",
"Toward deciphering the such a resistance of overfitting phenomenon, BID19 proposed an explanation that the training process keeps on improving a notion of classification margins in boosting, among later works on consistency of boosting with early stopping regularization BID2 BID30 BID28 .",
"Lately such a resistance to overfitting is again observed in deep neural networks with overparameterized models .",
"A renaissance of margin theory is proposed by BID0 with a normalization of network using Lipschitz constants bounded by products of operator spectral norms.",
"It inspires many further investigations in various settings BID14 BID16 BID12 .However",
", the improvement of margin distributions does not necessarily guarantee a better generalization performance, which is at least traced back to BID1 in his effort to understanding AdaBoost. In this",
"work, Breiman designed an algorithm arc-gv such that the margin can be maximized via a prediction game, then he demonstrated an example that one can achieve uniformly larger margin distributions on training data than AdaBoost but suffer a higher generalization error. In the",
"end of this paper, Breiman made the following comments with a dilemma: \"The results above leave us in a quandary. The laboratory results for various arcing algorithms are excellent, but the theory is in disarray. The evidence is that if we try too hard to make the margins larger, then overfitting sets in. ... My sense of it is that we just do not understand enough about what is going on.\"Breiman's dilemma triggers some further explorations to understand the limitation of margin theory in boosting BID18 Wang et al., 2008; BID27 . In particular",
", BID18 points out that the trees found by arg-gv have larger model complexity in terms of deeper average depth than AdaBoost, suggesting that margin maximization in arc-gv does not necessarily control the model complexity. The latter works",
"provide tighter bounds based on VC-dimension and optimized quantile training margins, which however do not apply to over-parametrized models in deep neural networks and the case where the training margin distributions are uniformly improved.In this paper, we are going to revisit Breiman's dilemma in the scenario of deep neural networks. Both the success",
"and failure can be seen on normalized margin based bounds on generalization error. First of all, let",
"'s look at the following illustration example.Example (Breiman's Dilemma with a CNN). A basic 5-layer",
"convolutional neural network of c channels (see Section 3 for details) is trained with CIFAR-10 dataset whose 10 percent labels are randomly permuted. When c = 50 with",
"92, 610 parameters, FIG0 shows the training error and generalization (test) error in solid curves. From the generalization",
"error in (a) one can see that overfitting",
"indeed happens after about 10 epochs, despite that training error continuously drops down to zero. One can successfully predict such",
"an overfitting phenomenon from FIG0 (b), the evolution of normalized",
"margin distributions defined later in this paper. In (b), while small margins are",
"monotonically",
"improved during training, large margins undergoes a phase transition from increase to decrease around 10 epochs such that one can predict the tendency of generalization error in (a) using large margin dynamics. Two particular",
"sections of large margin dynamics",
"are highlighted in (b), one at 8.3 on x-axis that measures the percentage",
"of normalized training margins no more than 8.3 (training margin error) and the other at 0.8 on y-axis that measures the normalized margins at quantile q = 0.8 (i.e. 1/γ q,t ). Both of them meet the tendency of generalization error",
"in (a) and find good early stopping time to avoid overfitting",
". However, as we increase the channel number to c = 400 with",
"about 5.8M parameters and retrain the model, (c) shows a similar overfitting phenomenon in generalization",
"error; on the other hand, (d) exhibits a monotonic improvement of normalized margin distributions",
"without a phase transition during the training and thus fails to capture the overfitting. This demonstrates the Breiman's dilemma in CNN. A key insight behind this",
"dilemma, is that one needs a trade-off between",
"the model expression power and the complexity of the dataset to endorse margin bounds a prediction power. On one hand, when the model has a limited expression power relative to the",
"training dataset, in the sense that the training margin distributions CAN NOT be uniformly improved during training, the generalization or test error may be predicted from dynamics of normalized margin distributions. On the other hand, if we push too hard to improve the margin by giving model",
"too much degree of freedom such that the training margins are uniformly improved during training process, the predictability may be lost. A trade-off is thus necessary to balance the complexity of model and dataset",
", otherwise one is doomed to meet Breiman's dilemma when the models arbitrarily increase the expression power.The example above shows that the expression power of models relative to the complexity of dataset, can be observed from the dynamics of normalized margins in training, instead of counting the number of parameters in neural networks. In the sequel, our main contributions are to make these precise by revisiting",
"the Rademacher complexity bounds with Lipschitz constants BID0 .• With the Lipschitz-normalized margins, a linear inequality is established between",
"training margin and test margin in Theorem 1. When both training and test normalized margin distributions undergo similar phase",
"transitions on increase-decrease during the training process, one may predict the generalization error based on the training margins as illustrated in FIG0 .• In a dual direction, one can define a quantile margin via the inverse of margin distribution",
"functions, to establish another linear inequality between the inverse quantile margins and the test margins as shown in Theorem 2. Quantile margin is far easier to tune in practice and enjoys a stronger prediction power exploiting",
"an adaptive selection of margins along model training.• In all cases, Breiman's dilemma may fail both of the methods above when dynamics of normalized training",
"margins undergo different phase transitions to that of test margins during training, where a uniform improvement of margins results in overfitting.Section 2 describes our method to derive the two linear inequalities of generalization bounds above.Extensive experimental results are shown in Section 3 and Appendix with basic CNNs, AlexNet, VGG, ResNet, and various datasets including CIFAR10, CIFAR100, and mini-Imagenet.",
"In this paper, we show that Breiman's dilemma is ubiquitous in deep learning, in addition to previous studies on Boosting algorithms.",
"We exhibit that Breiman's dilemma is closely related to the tradeoff between model expression power and data complexity.",
"A novel perspective on phase transitions in dynamics of Lipschitz-normalized margin distributions is proposed to inspect when the model has over-representation power compared to the dataset, instead of merely counting the number of parameters.",
"A data-driven early stopping rule by monitoring the margin dynamics is a future direction to explore.",
"Lipschitz semi-norm plays an important role in normalizing or regularizing neural networks, e.g. in GANs BID7 BID14 , therefore a more careful treatment deserves further pursuits."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2545454502105713,
0.2800000011920929,
0.2181818187236786,
0.3333333134651184,
0.3478260934352875,
0.35087719559669495,
0.1355932205915451,
0.1428571343421936,
0.1599999964237213,
0.2222222238779068,
0.20895521342754364,
0.2978723347187042,
0.19230768084526062,
0.04651162400841713,
0.29999998211860657,
0.20588234066963196,
0.20560747385025024,
0.1249999925494194,
0.2702702581882477,
0.25531914830207825,
0.04255318641662598,
0.10526315122842789,
0.17391303181648254,
0.1538461446762085,
0.19607841968536377,
0.09756097197532654,
0.22727271914482117,
0.375,
0.1666666716337204,
0.13333332538604736,
0.24242423474788666,
0.1428571343421936,
0.04651162400841713,
0.1702127605676651,
0.2666666507720947,
0.1818181723356247,
0.1538461446762085,
0.19230768084526062,
0.31884056329727173,
0.2950819730758667,
0.2682926654815674,
0.08163265138864517,
0.2222222238779068,
0.29032257199287415,
0.25806450843811035,
0.2222222238779068,
0.2716049253940582,
0.23529411852359772,
0.20408162474632263,
0.2666666507720947,
0.21276594698429108,
0.07017543166875839
] | Byl_ciRcY7 | true | [
"Bregman's dilemma is shown in deep learning that improvement of margins of over-parameterized models may result in overfitting, and dynamics of normalized margin distributions are proposed to predict generalization error and identify such a dilemma. "
] |
[
"It has been an open research challenge for developing an end-to-end multi-domain task-oriented dialogue system, in which a human can converse with the dialogue agent to complete tasks in more than one domain.",
"First, tracking belief states of multi-domain dialogues is difficult as the dialogue agent must obtain the complete belief states from all relevant domains, each of which can have shared slots common among domains as well as unique slots specifically for the domain only.",
"Second, the dialogue agent must also process various types of information, including contextual information from dialogue context, decoded dialogue states of current dialogue turn, and queried results from a knowledge base, to semantically shape context-aware and task-specific responses to human.",
"To address these challenges, we propose an end-to-end neural architecture for task-oriented dialogues in multiple domains.",
"We propose a novel Multi-level Neural Belief Tracker which tracks the dialogue belief states by learning signals at both slot and domain level independently.",
"The representations are combined in a Late Fusion approach to form joint feature vectors of (domain, slot) pairs.",
"Following recent work in end-to-end dialogue systems, we incorporate the belief tracker with generation components to address end-to-end dialogue tasks.",
"We achieve state-of-the-art performance on the MultiWOZ2.1 benchmark with 50.91% joint goal accuracy and competitive measures in task-completion and response generation.",
"In a task-oriented dialogue system, the Dialogue State Tracking (DST) module is responsible for updating dialogue states (essentially, what the user wants) at each dialogue turn.",
"The DST supports the dialogue agent to steer the conversation towards task completion.",
"As defined by Henderson et al. (2014a) , a dialogue belief state consists of inform slots -information to query a given knowledge base or database (DB), and request slots -information to be returned to the users.",
"Task-oriented dialogues can be categorized as either single-domain or multi-domain dialogues.",
"In single-domain dialogues, humans converse with the dialogue agent to complete tasks of one domain.",
"In contrast, in multi-domain dialogues, the tasks of interest can come from different domains.",
"A dialogue state in a multi-domain dialogue should include all inform and request slots of corresponding domains up to the current turn.",
"Examples of a single-domain dialogue and a multi-domain dialogue with annotated states after each turn can be seen in Figure 1 .",
"Despite there being several efforts in developing task-oriented dialogue systems in a single domain (Wen et al., 2016a; Lei et al., 2018) , there have been limited contributions for multi-domain task-oriented dialogues.",
"Developing end-to-end systems for multi-domain dialogues faces several challenges: (1) Belief states in multi-domain dialogues are usually larger and more complex than in single-domain, because of the diverse information from multiple domains.",
"Each domain can have shared slots that are common among domains or unique slots that are not shared with any.",
"(2) In an end-to-end system, the dialogue agent must incorporate information from source sequences, e.g. dialogue context and human utterances, as well as tracked belief states and extracted information from knowledge base, to semantically shape a relevant response with accurate information for task completion.",
"Directly applying methods for single-domain dialogues to multi-domain dialogues is not straightforward because the belief states extend across multiple domains.",
"A possible solution is to process a multi-domain dialogue for N D times for N D domains, each time obtaining a belief state of one domain.",
"However, this approach does not allow learning co-references in dialogues whereby users can switch from one domain to another turn by turn.",
"We propose an end-to-end dialogue system approach which explicitly track the dialogue states in multiple domains altogether.",
"Specifically, (1) we propose Multi-level Neural Belief Tracker to process contextual information for both slot-level and domain-level signals independently.",
"The two levels are subsequently combined to learn multi-domain dialogue states.",
"Our dialogue state tracker enables shared learning of slots common among domains as well as learning of unique slots in each domain.",
"(2) we utilize multi-head attention layers (Vaswani et al., 2017) to comprehensively process various types of information: dialogue context, user utterances, belief states of both inform and request slots, and DB query results.",
"The multi-head structure allows the model to independently attend to the features over multiple representation sub-spaces; and (3) we combine all components to create a dialogue system from state tracking to response generation.",
"The system can be jointly learned in an end-to-end manner.",
"Our end-to-end dialogue system utilizes supervision signals of dialogue states and output responses without using system action annotation.",
"To comprehensively validate our method, we compare our models with baselines in end-to-end, DST, and context-to-text generation settings.",
"We achieve the state-of-the-art performance in DST, task-completion, and response generation in the MultiWOZ2.1 corpus Eric et al., 2019 ) as compared to other baselines in similar settings.",
"In context-to-text generation setting that allows supervision of dialogue acts, our models can achieve competitive measures of Inform and BLEU metric.",
"In this work, we proposed an end-to-end dialogue system with a novel Multi-level Neural Belief Tracker.",
"Our DST module can track complex belief states of multiple domains and output more accurate dialogue states.",
"The DST is combined with attention-based generation module to generate dialogue responses.",
"Evaluated on the large-scale multi-domain dialogue benchmark MultiWOZ2.1, our models achieve the state-of-the-art performance in DST and competitive measures in taskcompletion and response generation.",
"Figure 3 : Example dialogue with the input system response St−1 and current user utterance Ut, and the output belief state BSt and system response St. Compared with TSCP (Row 3), our dialogue state and response (Last Row) are more correct and closer to the ground truth (Row 2).",
"Visualization of attention to the user utterance sequence at slot-level (lower right) and domain-level (upper right) is also included.",
"More red denotes higher attention score between domain or slot representation and token representation.",
"Best viewed in color."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2181818187236786,
0.03333332762122154,
0.10344827175140381,
0.1463414579629898,
0.20408162474632263,
0.09302324801683426,
0.2790697515010834,
0.42553192377090454,
0.0833333283662796,
0.10810810327529907,
0.1428571343421936,
0,
0.09999999403953552,
0.05128204822540283,
0.21739129722118378,
0.27272728085517883,
0.11538460850715637,
0.1111111044883728,
0.04878048226237297,
0.25,
0,
0.12765957415103912,
0.04347825422883034,
0.2926829159259796,
0.045454539358615875,
0.0555555522441864,
0.1860465109348297,
0.07017543166875839,
0.25925925374031067,
0.22857142984867096,
0.19512194395065308,
0.1904761791229248,
0.307692289352417,
0.13333332538604736,
0.39024388790130615,
0.09756097197532654,
0.1621621549129486,
0.3404255211353302,
0.19672130048274994,
0.04651162400841713,
0.052631575614213943,
0.06896551698446274
] | rylK-kBYwr | true | [
"We proposed an end-to-end dialogue system with a novel multi-level dialogue state tracker and achieved consistent performance on MultiWOZ2.1 in state tracking, task completion, and response generation performance."
] |
[
"Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. ",
"In this paper,we connect a general family of learning objectives including score matching to Wassersteingradient flows.",
"This connection enables us to design a scalable approximation to theseobjectives, with a form similar to single-step contrastive divergence.",
"We present applications in training implicit variational and Wasserstein auto-encoders with manifold-valued priors.",
"Unnormalized models define the model distribution as q(x; θ) ∝ exp(−E(x; θ)), where E(x; θ) is an energy function that can be parameterized by e.g. DNNs.",
"Unnormalized models can be used directly for density estimation, but another important application is in gradient estimation for implicit variational inference, where we can use score estimation in latent space to approximate an intractable learning objective.",
"This approach leads to improved performance in training implicit auto-encoders (Song et al., 2019) .",
"Maximum likelihood estimation for unnormalized models is intractable, and score matching (Hyvärinen, 2005 ) is a popular alternative.",
"Score matching optimizes the Fisher divergence",
"where we denote the data distribution as p.",
"Hyvärinen (2005) shows D F is equivalent to E p(x) ∆ log q(x; θ) + 1 2 ∇ log q(x; θ) 2 , where ∆ = i ∂ 2 i is the Laplacian; the equivalent form can be estimated using samples from p.",
"So far, when E has a complex parameterization, calculating the equivalent objective is still difficult, as it involves the second-order derivatives; and in practice, people turn to scalable approximations of the score matching objective (Song et al., 2019; Hyvarinen, 2007; Vincent, 2011) or other objectives such as the kernelized Stein discrepancy (KSD; Liu et al., 2016b; Liu and Wang, 2017) .",
"However, these approximations are developed on a case-by-case basis, leaving important applications unaddressed; for example, there is a lack of scalable learning methods for models on manifolds (Mardia et al., 2016) .",
"In this work, we present a unifying perspective to this problem, and derive scalable approximations for a variety of objectives including score matching.",
"We start by interpreting these objectives as the initial velocity of certain distribution-space gradient flows, which are simulated by common samplers.",
"This novel interpretation leads to a scalable approximation algorithm for all such objectives, reminiscent to single-step contrastive divergence (CD-1).",
"We refer to any objective bearing the above interpretation as above as a \"minimum velocity learning objective\", a term coined in the unpublished work Movellan (2007) .",
"Our formulation is a distribution-space generalization of their work, and applies to different objectives as the choice of distribution space varies.",
"Another gap we fill in is the development of a practically applicable algorithm: while the idea of approximating score matching with CD-1 is also explored in (Hyvarinen, 2007; Movellan, 2007) , previously the approximation suffers from an infinite variance problem, and is thus believed to be impractical (Hyvarinen, 2007; Saremi et al., 2018) ; we present a simple fix to this issue.",
"Additionally, we present an approximation to the objective function instead of its gradient, thus enabling the use of regularization like early-stopping.",
"Other related work will be reviewed in Appendix C.",
"One important application of our framework is in learning unnormalized models on manifolds.",
"This is needed in areas such as image analysis (Srivastava et al., 2007) , geology (Davis and Sampson, 1986) and bioinformatics (Boomsma et al., 2008) .",
"Moreover, as we present an approximation to the Riemannian score matching objective, it enables flexible inference for VAEs and WAEs with manifold-valued latent variables, as it enables gradient estimation for implicit variational distributions on manifolds.",
"It is believed that auto-encoders with a manifold-valued latent space can capture the distribution of certain types of data better (Mathieu et al., 2019; Anonymous, 2020; Davidson et al., 2018) .",
"As we will see in Section 3, our method leads to improved performance of VAEs and WAEs."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09756097197532654,
0.1818181723356247,
0.24242423474788666,
0.4000000059604645,
0,
0.12244897335767746,
0.1875,
0.11764705181121826,
0,
0,
0.03999999538064003,
0.17391304671764374,
0.17391303181648254,
0.31578946113586426,
0.10810810327529907,
0.2857142686843872,
0.20512819290161133,
0.21621620655059814,
0.20588235557079315,
0.2222222238779068,
0.07692307233810425,
0.13333332538604736,
0.10256409645080566,
0.2916666567325592,
0.08888888359069824,
0.3529411852359772
] | HJl4FJnEYr | true | [
"We present a scalable approximation to a wide range of EBM objectives, and applications in implicit VAEs and WAEs"
] |
[
"This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.",
"The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge.",
"Experimental results show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.",
"Continual learning (also called life-long learning and incremental learning) is a very general form of online learning in which data continuously arrive in a possibly non i.i.d. way, tasks may change over time (e.g. new classes may be discovered), and entirely new tasks can emerge BID43 BID47 BID39 .",
"What is more, continual learning systems must adapt to perform well on the entire set of tasks in an incremental way that avoids revisiting all previous data at each stage.",
"This is a key problem in machine learning since real world tasks continually evolve over time (e.g. they suffer from covariate and dataset shift) and the size of datasets often prohibits frequent batch updating.",
"Moreover, practitioners are often interested in solving a set of related tasks that benefit from being handled jointly in order to leverage multi-task transfer.",
"Continual learning is also of interest to cognitive science, being an intrinsic human ability.The ubiquity of deep learning means that it is important to develop deep continual learning methods.",
"However, it is challenging to strike a balance between adapting to recent data and retaining knowledge from old data.",
"Too much plasticity leads to the infamous catastrophic forgetting problem BID34 BID36 BID13 and too much stability leads to an inability to adapt.",
"Recently there has been a resurgence of interest in this area.",
"One approach trains individual models on each task and then carries out a second stage of training to combine them BID28 .",
"A more elegant and more flexible approach maintains a single model and uses a single type of regularized training that prevents drastic changes in the parameters which have a large influence on prediction, but allows other parameters to change more freely BID29 BID26 BID50 .",
"The approach developed here follows this venerable work, but is arguably more principled, extensible and automatic.",
"This paper is built on the observation that there already exists an extremely general framework for continual learning: Bayesian inference.",
"Critically, Bayesian inference retains a distribution over model parameters that indicates the plausibility of any setting given the observed data.",
"When new data arrive, we combine what previous data have told us about the model parameters (the previous posterior) with what the current data are telling us (the likelihood).",
"Multiplying and renormalizing yields the new posterior, from which point we can recurse.",
"Critically, the previous posterior constrains parameters that strongly influence prediction, preventing them from changing drastically, but it allows other parameters to change.",
"The wrinkle is that exact Bayesian inference is typically intractable and so approximations are required.",
"Fortunately, there is an extensive literature on approximate inference for neural networks.",
"We merge online variational inference (VI) BID11 BID42 BID4 with Monte Carlo VI for neural networks BID3 to yield variational continual learning (VCL).",
"In addition, we extend VCL to include a small episodic memory by combining VI with the coreset data summarization method BID0 BID19 .",
"We demonstrate that the framework is general, applicable to both deep discriminative models and deep generative models, and that it yields excellent performance.",
"Approximate Bayesian inference provides a natural framework for continual learning.",
"Variational Continual Learning (VCL), developed in this paper, is an approach in this vein that extends online variational inference to handle more general continual learning tasks and complex neural network models.",
"VCL can be enhanced by including a small episodic memory that leverages coreset algorithms from statistics and connects to message-scheduling in variational message passing.",
"We demonstrated how the VCL framework can be applied to both discriminative and generative models.",
"Experimental results showed state-of-the-art performance when compared to previous continual learning approaches, even though VCL has no free parameters in its objective function.",
"Future work should explore alternative approximate inference methods using the same framework and also develop more sophisticated episodic memories.",
"Finally, we note that VCL is ideally suited for efficient model refinement in sequential decision making problems, such as reinforcement learning and active learning.",
"DISPLAYFORM0 Figure 6: Generated images from each of the generators after training.",
"Each of the columns shows the images generated from a specific task's generator, and each of the lines shows the generations from generators of all trained tasks.",
"Clearly the naive approach suffers from catastrophic forgetting, while other approaches successfully remember previous tasks."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.41025641560554504,
0.2702702581882477,
0.23529411852359772,
0.1111111044883728,
0.1428571343421936,
0.17391304671764374,
0.11428570747375488,
0.1666666567325592,
0.06896550953388214,
0,
0.17391303181648254,
0.12121211737394333,
0.08163265138864517,
0,
0.25,
0.06451612710952759,
0,
0,
0,
0,
0.0833333283662796,
0.1764705777168274,
0.11764705181121826,
0.1249999925494194,
0.3636363446712494,
0.19512194395065308,
0.1111111044883728,
0.07407406717538834,
0.17142856121063232,
0,
0.17142856121063232,
0,
0.0624999962747097,
0
] | BkQqq0gRb | true | [
"This paper develops a principled method for continual learning in deep models."
] |
[
"In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy.",
"In this study, we propose an RL algorithm for solving PO tasks.",
"Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM.",
"The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization.",
"Our experiments show that the proposed algorithm achieved better data efficiency and/or learned more optimal policy than other alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.",
"Model-free deep reinforcement learning (RL) algorithms have been developed to solve difficult control and decision-making tasks by self-exploration (Sutton & Barto, 1998; Mnih et al., 2015; Silver et al., 2016) .",
"While various kinds of fully observable environments have been well investigated, recently, partially observable (PO) environments (Hafner et al., 2018; Igl et al., 2018; Lee et al., 2019; Jaderberg et al., 2019) have commanded greater attention, since real-world applications often need to tackle incomplete information and a non-trivial solution is highly desirable.",
"There are many types of PO tasks; however, those that can be solved by taking the history of observations into account are more common.",
"These tasks are often encountered in real life, such as videos games that require memorization of previous events (Kapturowski et al., 2018; Jaderberg et al., 2019) and robotic control using real-time images as input (Hafner et al., 2018; Lee et al., 2019) .",
"While humans are good at solving these tasks by extracting crucial information from the past observations, deep RL agents often have difficulty acquiring satisfactory policy and achieving good data efficiency, compared to those in fully observable tasks (Hafner et al., 2018; Lee et al., 2019) .",
"For solving such PO tasks, several categories of methods have been proposed.",
"One simple, straightforward solution is to include a history of raw observations in the current \"observation\" (McCallum, 1993; Lee et al., 2019) .",
"Unfortunately, this method can be impractical when decision-making requires a long-term memory because dimension of observation become unacceptably large if a long history is included.",
"Another category is based on model-free RL methods with recurrent neural networks (RNN) as function approximators (Schmidhuber, 1990; 1991; Igl et al., 2018; Kapturowski et al., 2018; Jaderberg et al., 2019) , which is usually more tractable to implement.",
"In this case, RNNs need to tackle two problems simultaneously (Lee et al., 2019) : learning representation (encoded by hidden states of the RNN) of the underlying states of the environment from the state-transition data, and learning to maximize returns using the learned representation.",
"As most RL algorithms use a bootstrapping strategy to learn the expected return and to improve the policy (Sutton & Barto, 1998) , it is challenging to train the RNN stably and efficiently, since RNNs are relatively more difficult to train (Pascanu et al., 2013) than feedforward neural networks.",
"The third category considers learning a model of the environment and estimating a belief state, extracted from a sequence of state-transitions (Kaelbling et al., 1998; Ha & Schmidhuber, 2018; Lee et al., 2019) .",
"The belief state is an agent-estimated variable encoding underlying states of the environment that determines state-transitions and rewards.",
"Perfectly-estimated belief states can thus be taken as \"observations\" of an RL agent that contains complete information for solving the task.",
"Therefore, solving a PO task is segregated into a representation learning problem and a fully observable RL problem.",
"Since fully observable RL problems have been well explored by the RL community, the critical challenge here is how to estimate the belief state.",
"In this study, we developed a variational recurrent model (VRM) that models sequential observations and rewards using a latent stochastic variable.",
"The VRM is an extension of the variational recurrent neural network (VRNN) model (Chung et al., 2015) that takes actions into account.",
"Our approach falls into the third category by taking the internal states of the VRM together with raw observations as the belief state.",
"We then propose an algorithm to solve PO tasks by training the VRM and a feed-forward RL controller network, respectively.",
"The algorithm can be applied in an end-to-end manner, without fine tuning of a hyperparameters.",
"We then experimentally evaluated the proposed algorithm in various PO versions of robotic control tasks.",
"The agents showed substantial policy improvement in all tasks, and in some tasks the algorithm performed essentially as in fully observable cases.",
"In particular, our algorithm demonstrates greater performance compared to alternative approaches in environments where only velocity information is observable or in which long-term memorization is needed.",
"In this paper, we proposed a variational recurrent model for learning to represent underlying states of PO environments and the corresponding algorithm for solving POMDPs.",
"Our experimental results demonstrate effectiveness of the proposed algorithm in tasks in which underlying states cannot be simply inferred using a short sequence of observations.",
"Our work can be considered an attempt to understand how RL benefits from stochastic Bayesian inference of state-transitions, which actually happens in the brain (Funamizu et al., 2016) , but has been considered less often in RL studies.",
"We used stochastic models in this work which we actually found perform better than deterministic ones, even through the environments we used are deterministic (Appendix C).",
"The VRNN can be replaced with other alternatives (Bayer & Osendorfer, 2014; Goyal et al., 2017) to potentially improve performance, although developing model architecture is beyond the scope of the current study.",
"Moreover, a recent study (Ahmadi & Tani, 2019) showed a novel way of inference using back-propagation of prediction errors, which may also benefit our future studies.",
"Many researchers think that there are two distinct systems for model-based and model-free RL in the brain (Gläscher et al., 2010; Lee et al., 2014) and a number of studies investigated how and when the brain switches between them (Smittenaar et al., 2013; Lee et al., 2014) .",
"However, Stachenfeld et al. (2017) suggested that the hippocampus can learn a successor representation of the environment that benefits both model-free and model-based RL, contrary to the aforementioned conventional view.",
"We further propose another possibility, that a model is learned, but not used for planning or dreaming.",
"This blurs the distinction between model-based and model-free RL.",
"Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine.",
"Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.",
"In International Conference on Machine Learning, pp. 1856-1865, 2018a."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07843136787414551,
0.27586206793785095,
0.3333333432674408,
0.045454539358615875,
0.15686273574829102,
0.08695651590824127,
0.03333332762122154,
0.10256409645080566,
0.039215680211782455,
0.16949151456356049,
0.06896550953388214,
0.09999999403953552,
0.04878048226237297,
0.07999999821186066,
0.19230768084526062,
0.10169491171836853,
0.1304347813129425,
0.17142856121063232,
0.2631579041481018,
0.1875,
0.15789473056793213,
0.2702702581882477,
0.19999998807907104,
0.1621621549129486,
0.2702702581882477,
0.1249999925494194,
0.1249999925494194,
0.10810810327529907,
0.04878048226237297,
0.5365853905677795,
0.29999998211860657,
0.07692307233810425,
0.04999999329447746,
0.08163265138864517,
0.09756097197532654,
0.15094339847564697,
0.09090908616781235,
0.1764705777168274,
0.1538461446762085,
0,
0.13793103396892548,
0
] | r1lL4a4tDB | true | [
"A deep RL algorithm for solving POMDPs by auto-encoding the underlying states using a variational recurrent model"
] |
[
"This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning (RL).",
"The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return.",
"The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS).",
"Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection.",
"Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations.",
"ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations.",
"ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS.",
"SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.",
"Reinforcement Learning (RL, BID18 ) is a machine learning framework for optimising the behaviour of an agent interacting with an unknown environment.",
"For the most practical problems, such as dialogue or robotics, trajectory collection is costly and sample efficiency is the main key performance indicator.",
"Consequently, when applying RL to a new problem, one must carefully choose in advance a model, a representation, an optimisation technique and their parameters.",
"Facing the complexity of choice, RL and domain expertise is not sufficient.",
"Confronted to the cost of data, the popular trial and error approach shows its limits.We develop an online learning version (Gagliolo & Schmidhuber, 2006; BID1 of Algorithm Selection (AS, BID15 ; BID17 BID5 ).",
"It consists in testing several algorithms on the task and in selecting the best one at a given time.",
"For clarity, throughout the whole article, the algorithm selector is called a meta-algorithm, and the set of algorithms available to the meta-algorithm is called a portfolio.",
"The meta-algorithm maximises an objective function such as the RL return.",
"Beyond the sample efficiency objective, the online AS approach besides addresses four practical problems for online RL-based systems.",
"First, it improves robustness: if an algorithm fails to terminate, or outputs to an aberrant policy, it will be dismissed and others will be selected instead.",
"Second, convergence guarantees and empirical efficiency may be united by covering the empirically efficient algorithms with slower algorithms that have convergence guarantees.",
"Third, it enables curriculum learning: shallow models control the policy in the early stages, while deep models discover the best solution in late stages.",
"And four, it allows to define an objective function that is not an RL return.A fair algorithm selection implies a fair budget allocation between the algorithms, so that they can be equitably evaluated and compared.",
"In order to comply with this requirement, the reinforcement algorithms in the portfolio are assumed to be off-policy, and are trained on every trajectory, regardless which algorithm controls it.",
"Section 2 provides a unifying view of RL algorithms, that allows information sharing between algorithms, whatever their state representations and optimisation techniques.",
"It also formalises the problem of online selection of off-policy RL algorithms.Next, Section 3 presents the Epochal Stochastic Bandit AS (ESBAS), a novel meta-algorithm addressing the online off-policy RL AS problem.",
"Its principle relies on a doubling trick: it divides the time-scale into epochs of exponential length inside which the algorithms are not allowed to update their policies.",
"During each epoch, the algorithms have therefore a constant policy and a stochastic multi-armed bandit can be in charge of the AS with strong pseudo-regret theoretical guaranties.",
"A thorough theoretical analysis provides for ESBAS upper bounds.",
"Then, Section 4 evaluates ESBAS on a dialogue task where it outperforms each individual algorithm in most configurations.Afterwards, in Section 5, ESBAS, which is initially designed for a growing batch RL setting, is adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS.",
"It is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on Q*bert, where running several DQN with different network size and depth in parallel allows to improve the final performance by a wide margin.",
"Finally, Section 6 concludes the paper with prospective ideas of improvement.",
"In this article, we tackle the problem of selecting online off-policy RL algorithms.",
"The problem is formalised as follows: from a fixed portfolio of algorithms, a meta-algorithm learns which one performs the best on the task at hand.",
"Fairness of algorithm evaluation implies that the RL algorithms learn off-policy.",
"ESBAS, a novel meta-algorithm, is proposed.",
"Its principle is to divide the meta-time scale into epochs.",
"Algorithms are allowed to update their policies only at the start each epoch.",
"As the policies are constant inside each epoch, the problem can be cast into a stochastic multi-armed bandit.",
"An implementation is detailed and a theoretical analysis leads to upper bounds on the regrets.",
"ESBAS is designed for the growing batch RL setting.",
"This limited online setting is required in many real-world applications where updating the policy requires a lot of resources.Experiments are first led on a negotiation dialogue game, interacting with a human data-built simulated user.",
"In most settings, not only ESBAS demonstrates its efficiency to select the best algorithm, but it also outperforms the best algorithm in the portfolio thanks to curriculum learning, and variance reduction similar to that of Ensemble Learning.",
"Then, ESBAS is adapted to a full online setting, where algorithms are allowed to update after each transition.",
"This meta-algorithm, called SSBAS, is empirically validated on a fruit collection task where it performs efficient hyper-parameter optimisation.",
"SSBAS is also evaluated on the Q*bert Atari game, where it achieves a substantial improvement over the single algorithm counterparts.We interpret ESBAS/SSBAS's success at reliably outperforming the best algorithm in the portfolio as the result of the four following potential added values.",
"First, curriculum learning: ESBAS/SSBAS selects the algorithm that is the most fitted with the data size.",
"This property allows for instance to use shallow algorithms when having only a few data and deep algorithms once collected a lot.",
"Second, diversified policies: ESBAS/SSBAS computes and experiments several policies.",
"Those diversified policies generate trajectories that are less redundant, and therefore more informational.",
"As a result, the policies trained on these trajectories should be more efficient.",
"Third, robustness: if one algorithm fails at finding good policies, it will soon be discarded.",
"This property prevents the agent from repeating again and again the same obvious mistakes.",
"Four and last, run adaptation: of course, there has to be an algorithm that is the best on average for one given task at one given meta-time.",
"But depending on the variance in the trajectory collection, it did not necessarily train the best policy for each run.",
"The ESBAS/SSBAS meta-algorithm tries and selects the algorithm that is the best at each run.",
"Some of those properties are inherited by algorithm selection similarity with ensemble learning (Dietterich, 2002) .",
"BID23 uses a vote amongst the algorithms to decide the control of the next transition.",
"Instead, ESBAS/SSBAS selects the best performing algorithm.Regarding the portfolio design, it mostly depends on the available computational power per sample ratio.",
"For practical implementations, we recommend to limit the use of two highly demanding algorithms, paired with several faster algorithms that can take care of first learning stages, and to use algorithms that are diverse regarding models, hypotheses, etc.",
"Adding two algorithms that are too similar adds inertia, while they are likely to not be distinguishable by ESBAS/SSBAS.",
"More detailed recommendations for building an efficient RL portfolio are left for future work.",
"Speech recognition score Section C.1.1"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9629629850387573,
0.17391304671764374,
0,
0.2857142686843872,
0.06896550953388214,
0.12121211737394333,
0.05882352590560913,
0.045454543083906174,
0.23529411852359772,
0.05882352590560913,
0.05714285373687744,
0.1599999964237213,
0.1304347813129425,
0.13333332538604736,
0.1818181723356247,
0.0833333283662796,
0.13793103396892548,
0.05882352590560913,
0.0624999962747097,
0.12121211737394333,
0.1304347813129425,
0.1538461446762085,
0.05882352590560913,
0.3243243098258972,
0.10256409645080566,
0.15789473056793213,
0,
0.10526315122842789,
0.07407406717538834,
0.25,
0.307692289352417,
0.1666666567325592,
0.25,
0,
0.08695651590824127,
0.07692307233810425,
0.13333332538604736,
0.0714285671710968,
0.09090908616781235,
0.21739129722118378,
0.2222222238779068,
0.06666666269302368,
0.06451612710952759,
0.1599999964237213,
0.14814814925193787,
0.060606054961681366,
0,
0,
0.07692307233810425,
0.0714285671710968,
0.1599999964237213,
0.15789473056793213,
0.12903225421905518,
0.14814814925193787,
0.2142857164144516,
0.1538461446762085,
0.12121211737394333,
0.08695651590824127,
0,
0,
0
] | SyoDInJ0- | true | [
"This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning."
] |
[
"In this paper, we present a new generative model for learning latent embeddings.",
"Compared to the classical generative process, where each observed data point is generated from an individual latent variable, our approach assumes a global latent variable to generate the whole set of observed data points.",
"We then propose a learning objective that is derived as an approximation to a lower bound to the data log likelihood, leading to our algorithm, WiSE-ALE.",
"Compared to the standard ELBO objective, where the variational posterior for each data point is encouraged to match the prior distribution, the WiSE-ALE objective matches the averaged posterior, over all samples, with the prior, allowing the sample-wise posterior distributions to have a wider range of acceptable embedding mean and variance and leading to better reconstruction quality in the auto-encoding process.",
"Through various examples and comparison to other state-of-the-art VAE models, we demonstrate that WiSE-ALE has excellent information embedding properties, whilst still retaining the ability to learn a smooth, compact representation.",
"Unsupervised learning is a central task in machine learning.",
"Its objective can be informally described as learning a representation of some observed forms of information in a way that the representation summarizes the overall statistical regularities of the data BID0 .",
"Deep generative models are a popular choice for unsupervised learning, as they marry deep learning with probabilistic models to estimate a joint probability between high dimensional input variables x and unobserved latent variables z.",
"Early successes of deep generative models came from Restricted Boltzmann Machines BID7 and Deep Boltzmann Machines BID15 , which aim to learn a compact representation of data.",
"However, the fully stochastic nature of the network requires layer-by-layer pre-training using MCMC-based sampling algorithms, resulting in heavy computation cost.",
"BID9 consider the objective of optimizing the parameters in an auto-encoder network by deriving an analytic solution to a variational lower bound of the log likelihood of the data, leading to the Auto-Encoding Variational Bayes (AEVB) algorithm.",
"They apply a reparameterization trick to maximally utilize deterministic mappings in the network, significantly simplifying the training procedure and reducing instability.",
"Furthermore, a regularization term naturally occurs in their model, allowing a prior p(z) to be placed over every sample embedding q(z|x).",
"As a result, the learned representation becomes compact and smooth; see e.g. FIG0 where we learn a 2D embedding of MNIST digits using 4 different methods and visualize the aggregate posterior distribution of 64 random samples in the learnt 2D embedding space.",
"However, because the choice of the prior is often uninformative, the smoothness constraint imposed by this regularization term can cause information loss between the input samples and the latent embeddings, as shown by the merging of individual embedding distributions in FIG0",
"(d) (especially in the outer areas away from zero code).",
"Extreme effects of such behaviours can be noticed from β-VAE BID6 , a derivative algorithm of AEVB which further increases the weighting on the regularizing term with the aim of learning an even smoother, disentangled representation of the data.",
"As shown in FIG0",
"(e), the individual embedding distributions are almost indistinguishable, leading to an overly severe information bottleneck which can cause high rates of distortion BID16 .",
"In contrast, perfect reconstruction can be achieved using WAE (Tolstikhin et al., 2017) , but the learnt embedding distributions appear to severely non-smooth ( FIG0 ), indicating a small amount of noise in the latent space would cause generation process to fail.In this paper, we propose WiSE-ALE (a wide sample estimator), which imposes a prior on the bulk statistics of a mini-batch of latent embeddings.",
"Learning under our WiSE-ALE objective does not penalize individual embeddings lying away from the zero code, so long as the aggregate distribution (the average of all individual embedding distributions) does not violate the prior significantly.",
"Hence, our approach mitigates the distortion caused by the current form of the prior constraint in the AEVB objective.",
"Furthermore, the objective of our WiSE-ALE algorithm is derived by applying variational inference in a simple latent variable model (Section",
"2) and with further approximation, we derive an analytic form of the learning objective, resulting in efficient learning algorithm.In general, the latent representation learned using our algorithm enjoys the following properties:",
"1) smoothness, as indicated in FIG0 , the probability density for each individual embedding distribution decays smoothly from the peak value;",
"2) compactness, as individual embeddings tend to occupy a maximal local area in the latent space with minimal gaps in between; and",
"3) separation, indicated by the narrow, but clear borders between neighbouring embedding distributions as opposed to the merging seen in AEVB.",
"In summary, our contributions are:• proposing a new latent variable model that uses a single global latent variable to generate the entire dataset,• deriving a variational lower bound to the data log likelihood in our latent variable model, which allows us to impose prior constraint on the bulk statistics of a mini-batch embedding distributions,• and deriving analytic approximations to the lower bound, leading to our efficient WiSE-ALE learning algorithm.In the rest of the paper, we first review directed graphical models in Section",
"2. We then derive our variational lower bound and its analytic approximations in Section",
"3. Related work is discussed in Section",
"4. Experiment results are analyzed in Section 5, leading to conclusions in Section 6.",
"In this paper, we propose a new latent variable model where a global latent variable is used to generate the entire dataset.",
"We then derive a variational lower bound to the data log likelihood, which allows us to impose a prior constraint on the bulk statistics of the aggregate posterior distribution for the entire dataset.",
"Using an analytic approximation to this lower bound as our learning objective, we propose WiSE-ALE algorithm.",
"We have demonstrated its ability to achieve excellent reconstruction quality, as well as forming a smooth, compact and meaningful latent representation.",
"In the future, we would like to understand the properties of the latent embeddings learnt through our method and apply it for suitable applications.",
"In this appendix, we omit the trainable parameters φ and θ in the expressions of distributions for simplicity.",
"For example, q(z|x) is equivalent to q φ (z|x) and p(x|z) represents p θ (x|z)."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4285714328289032,
0.22727271914482117,
0.2631579041481018,
0.1269841194152832,
0.13636362552642822,
0.08695651590824127,
0.14999999105930328,
0.17391303181648254,
0.20512820780277252,
0,
0.09090908616781235,
0.11428570747375488,
0.11428570747375488,
0.07843136787414551,
0.04081632196903229,
0,
0.0833333283662796,
0,
0.052631575614213943,
0.138888880610466,
0.04444444179534912,
0,
0.22857142984867096,
0.04651162400841713,
0.05714285373687744,
0.2222222238779068,
0.05714285373687744,
0.18666666746139526,
0.06896550953388214,
0,
0.07407406717538834,
0.4117647111415863,
0.23255813121795654,
0.12903225421905518,
0.22857142984867096,
0.21621620655059814,
0.0624999962747097,
0.06666666269302368
] | HylINLUKuV | true | [
"We propose a new latent variable model to learn latent embeddings for some high-dimensional data. "
] |
[
"We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. ",
"This data-dependent activation function remarkably improves both classification accuracy and stability to adversarial perturbations.",
"Together with the total variation minimization of adversarial images and augmented training, under the strongest attack, we achieve up to 20.6%, 50.7%, and 68.7% accuracy improvement w.r.t. ",
"the fast gradient sign method, iterative fast gradient sign method, and Carlini-WagnerL2attacks, respectively. ",
"Our defense strategy is additive to many of the existing methods. ",
"We give an intuitive explanation of our defense strategy via analyzing the geometry of the feature space.",
"For reproducibility, the code will be available on GitHub.",
"The adversarial vulnerability BID26 of deep neural nets (DNNs) threatens their applicability in security critical tasks, e.g., autonomous cars BID0 , robotics BID8 , DNN-based malware detection systems BID20 BID7 .",
"Since the pioneering work by BID26 , many advanced adversarial attack schemes have been devised to generate imperceptible perturbations to sufficiently fool the DNNs BID6 BID19 BID5 BID29 BID11 BID2 .",
"And not only are adversarial attacks successful in white-box attacks, i.e. when the adversary has access to the DNN parameters, but they are also successful in black-box attacks, i.e. it has no access to the parameters.",
"Black-box attacks are successful because one can perturb an image so it misclassifies on one DNN, and the same perturbed image also has a significant chance to be misclassified by another DNN; this is known as transferability of adversarial examples BID22 ).",
"Due to this transferability, it is very easy to attack neural nets in a blackbox fashion BID14 BID4 .",
"In fact, there exist universal perturbations that can imperceptibly perturb any image and cause misclassification for any given network (MoosaviDezfooli et al. (2017) ).",
"There is much recent research on designing advanced adversarial attacks and defending against adversarial perturbation.In this work, we propose to defend against adversarial attacks by changing the DNNs' output activation function to a manifold-interpolating function, in order to seamlessly utilize the training data's information when performing inference.",
"Together with the total variation minimization (TVM) and augmented training, we show state-of-the-art defense results on the CIFAR-10 benchmark.",
"Moreover, we show that adversarial images generated from attacking the DNNs with an interpolating function are more transferable to other DNNs, than those resulting from attacking standard DNNs."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.15789473056793213,
0.1875,
0.16326530277729034,
0.0714285671710968,
0.06666666269302368,
0.12121211737394333,
0.07407406717538834,
0.0416666604578495,
0.043478257954120636,
0.04444443807005882,
0.10344827175140381,
0,
0.09756097197532654,
0.20338982343673706,
0.277777761220932,
0.04651162400841713
] | r1z1UjA5FX | true | [
"We proposal strategies for adversarial defense based on data dependent activation function, total variation minimization, and training data augmentation"
] |
[
"This work presents a scalable solution to continuous visual speech recognition.",
"To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video).",
"In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words.",
"The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set.",
"In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information.",
"Our approach significantly improves on previous lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively.",
"Deep learning techniques have allowed for significant advances in lipreading over the last few years BID6 BID72 BID30 BID80 .",
"However, these approaches have often been limited to narrow vocabularies, and relatively small datasets BID6 BID72 BID80 .",
"Often the approaches focus on single-word classification BID26 BID11 BID76 BID67 BID44 BID68 BID45 BID51 BID52 BID46 BID29 BID4 BID69 BID75 and do not attack the continuous recognition setting.",
"In this paper, we contribute a novel method for large-vocabulary continuous visual speech recognition.",
"We report substantial reductions in word error rate (WER) over the state-of-the-art approaches even with a larger vocabulary.Assisting people with speech impairments is a key motivating factor behind this work.",
"Visual speech recognition could positively impact the lives of hundreds of thousands of patients with speech impairments worldwide.",
"For example, in the U.S. alone 103,925 tracheostomies were performed in 2014 (HCUPnet, 2014) , a procedure that can result in a difficulty to speak (disphonia) or an inability to produce voiced sound (aphonia).",
"While this paper focuses on a scalable solution to lipreading using a vast diverse dataset, we also expand on this important medical application in Appendix A. The discussion there has been provided by medical experts and is aimed at medical practitioners.We propose a novel lipreading system, illustrated in Figure 1 , which transforms raw video into a word sequence.",
"The first component of this system is a data processing pipeline used to create the Large-Scale Visual Speech Recognition (LSVSR) dataset used in this work, distilled from YouTube videos and consisting of phoneme sequences paired with video clips of faces speaking (3,886 hours of video).",
"The creation of the dataset alone required a non-trivial combination of computer vision and machine learning techniques.",
"At a high-level this process takes as input raw video and annotated audio segments, filters and preprocesses them, and produces a collection of aligned phoneme and lip frame sequences.",
"Compared to previous work on visual speech recognition, our pipeline uses landmark smoothing, a blurriness filter, an improved speaking classifier network and outputs phonemes.",
"The details of this process are described in Section",
"3. Figure 1: The full visual speech recognition system introduced by this work consists of a data processing pipeline that generates lip and phoneme clips from YouTube videos (see Section 3), and a scalable deep neural network for phoneme recognition combined with a production-grade word-level decoding module used for inference (see Section 4).Next",
", this work introduces a new neural network architecture for lipreading, which we call Vision to Phoneme (V2P), trained to produce a sequence of phoneme distributions given a sequence of video frames. In",
"light of the large scale of our dataset, the network design has been highly tuned to maximize predictive performance subject to the strong computational and memory limits of modern GPUs in a distributed setting. In",
"this setting we found that techniques such as group normalization BID79 to be key to the reported results. Furthermore",
", our approach is the first to combine a deep learning-based visual speech recognition model with production-grade word-level decoding techniques. By decoupling",
"phoneme prediction and word decoding as is often done in speech recognition, we are able to arbitrarily extend the vocabulary without retraining the neural network. Details of our",
"model and this decoding process are given in Section 4. By design,",
"the trained model only performs well under optimal lighting conditions, within a certain distance from a subject, and at high quality. It does not perform",
"well in other contexts.Finally, this entire lipreading system results in an unprecedented WER of 40.9% as measured on a held-out set from our dataset. In comparison, professional",
"lipreaders achieve either 86.4% or 92.9% WER on the same dataset, depending on the amount of context given. Similarly, previous state-of-the-art",
"approaches such as variants of LipNet Assael et al. (2017) and of Watch, Attend, and Spell (WAS) demonstrated WERs of only 89.8% and 76.8% respectively.",
"We presented a novel, large-scale visual speech recognition system.",
"Our system consists of a data processing pipeline used to construct a vast dataset-an order of magnitude greater than all previous approaches both in terms of vocabulary and the sheer number of example sequences.",
"We described a scalable model for producing phoneme and word sequences from processed video clips that is capable of nearly halving the error rate of the previous state-of-the-art methods on this dataset, and achieving a new state-of-the-art in a dataset presented contemporaneously with this work.",
"The combination of methods in this work represents a significant improvement in lipreading performance, a technology which can enhance automatic speech recognition systems, and which has enormous potential to improve the lives of speech impaired patients worldwide.A MEDICAL APPLICATIONS As a consequence of injury or disease and its associated treatment, millions of people worldwide have communication problems preventing them from generating sound.",
"As hearing aids and cochlear transplants have transformed the lives of people with hearing loss, there is potential for lip reading technology to provide alternative communication strategies for people who have lost their voice.Aphonia is the inability to produce voiced sound.",
"It may result from injury, paralysis, removal or other disorders of the larynx.",
"Common examples of primary aphonia include bilateral recurrent laryngeal nerve damage as a result of thyroidectomy (removal of the thyroid gland and any tumour) for thyroid cancer, laryngectomy (surgical removal of the voice box) for laryngeal cancers, or tracheostomy (the creation of an alternate airway in the neck bypassing the voicebox).",
"Dysphonia is difficulty in speaking due to a physical disorder of the mouth, tongue, throat, or vocal cords.",
"Unlike aphonia, patients retain some ability to speak.",
"For example, in Spasmodic dysphonia, a disorder in which the laryngeal muscles go into periods of spasm, patients experience breaks or interruptions in the voice, often every few sentences, which can make a person difficult to understand.We see this work having potential medical applications for patients with aphonia or dysphonia in at least two distinct settings.",
"Firstly, an acute care setting (i.e. a hospital with an emergency room and an intensive care unit), patients frequently undergo elective (planned) or emergency (unplanned) procedures (e.g. Tracheostomy) which may result in aphonia or dysphonia.",
"In the U.S. 103,925 tracheostomies were performed in 2014, resulting in an average hospital stay of 29 days (HCUPnet, 2014) .",
"Similarly, in England and Wales 15,000 tracheostomies are performed each year The Health Foundation (2014).Where",
"these procedures are unplanned, there is often no time or opportunity to psychologically prepare the patient for their loss of voice, or to teach the patient alternative communication strategies. Some",
"conditions that necessitate tracheotomy, such as high spinal cord injuries, also affect limb function, further hampering alternative communication methods such as writing.Even where procedures are planned, such as for head and neck cancers, despite preparation of the patient through consultation with a speech and language therapist, many patients find their loss of voice highly frustrating especially in the immediate post-operative period.Secondly, where surgery has left these patients cancer-free, they may live for many years, even decades without the ability to speak effectively, in these patients we can envisage that they may use this technology in the community, after discharge from hospital. While",
"some patients may either have tracheotomy reversed, or adapt to speaking via a voice prosthesis, electro-larynx or esophageal speech, many patients do not achieve functional spoken communication. Even",
"in those who achieve good face-to-face spoken communication, few laryngectomy patients can communicate effectively on the telephone, and face the frequent frustration of being hung-up on by call centres and others who do not know them.Acute care applications. It is",
"widely acknowledged that patients with communication disabilities, including speech impairment or aphonia can pose significant challenges in the clinical environment, especially in acute care settings, leading to potentially poorer quality of care BID42 . While",
"some patients will be aware prior to surgery that they may wake up unable to speak, for many patients in the acute setting (e.g. Cervical Spinal Cord Injury, sudden airway obstruction) who wake up following an unplanned tracheotomy, their sudden inability to communicate can be phenomenally distressing.Community applications. Patients",
"who are discharged from hospital without the ability to speak, or with poor speech quality, face a multitude of challenges in day-to-day life which limits their independence, social functioning and ability to seek employment.We hypothesize that the application of technology capable of lip-reading individuals with the ability to move their facial muscles, but without the ability to speak audibly could significantly improve quality of life for these patients. Where the",
"application of this technology improves the person's ability to communicate over the telephone, it would enhance not only their social interactions, but also their ability to work effectively in jobs that require speaking over the phone.Finally, in patients who are neither able to speak, nor to move their arms, this technology could represent a step-change in terms of the speed at which they can communicate, as compared to eye-tracking or facial muscle based approaches in use today."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.17142856121063232,
0.16326530277729034,
0.0714285671710968,
0.0555555522441864,
0,
0,
0.0714285671710968,
0.10256409645080566,
0.4000000059604645,
0.14999999105930328,
0.1538461446762085,
0.0952380895614624,
0.12903225421905518,
0.07843136787414551,
0.07407406717538834,
0.0555555522441864,
0.2857142686843872,
0,
0.21052631735801697,
0.1538461446762085,
0.09756097197532654,
0.06896550953388214,
0.3030303120613098,
0.10526315122842789,
0,
0.05714285373687744,
0.05128204822540283,
0,
0,
0.4000000059604645,
0.09756097197532654,
0.12244897335767746,
0.15625,
0.043478257954120636,
0,
0.038461536169052124,
0.13793103396892548,
0.10526315122842789,
0.09999999403953552,
0.04651162400841713,
0,
0,
0.054054051637649536,
0.06315789371728897,
0.10810810327529907,
0,
0.09302325546741486,
0.0363636314868927,
0.0923076868057251,
0.0833333283662796
] | HJxpDiC5tX | true | [
"This work presents a scalable solution to continuous visual speech recognition."
] |
[
"Previous work (Bowman et al., 2015; Yang et al., 2017) has found difficulty developing generative models based on variational autoencoders (VAEs) for text.",
"To address the problem of the decoder ignoring information from the encoder (posterior collapse), these previous models weaken the capacity of the decoder to force the model to use information from latent variables.",
"However, this strategy is not ideal as it degrades the quality of generated text and increases hyper-parameters.",
"In this paper, we propose a new VAE for text utilizing a multimodal prior distribution, a modified encoder, and multi-task learning.",
"We show our model can generate well-conditioned sentences without weakening the capacity of the decoder.",
"Also, the multimodal prior distribution improves the interpretability of acquired representations.",
"Research into generative models for text is an important field in natural language processing (NLP) and various models have been historically proposed.",
"Although supervised learning with recurrent neural networks is the predominant way to construct generative language models BID22 BID28 BID26 , auto-regressive word-by-word sequence generation is not good at capturing interpretable representations of text or controlling text generation with global features BID1 .",
"In order to generate sentences conditioned on probabilistic latent variables, BID1 proposed Variational Autoencoders (VAEs) BID11 for sentences.",
"However, some serious problems that prevent training of the model have been reported.The problem that has been mainly discussed in previous papers is called \"posterior collapse\" BID25 .",
"Because decoders for textual VAEs are trained with \"teacher forcing\" BID27 , they can be trained to some extent without relying on latent variables.",
"As a result, the KL term of the optimization function (Equation 1) converges to zero and encoder input is ignored BID1 .",
"Successful textual VAEs have solved this problem by handicapping the decoder so the model is forced to utilize latent variables BID1 BID30 .",
"However, we believe that weakening the capacity of the decoder may lower the quality of generated texts and requires careful hyper-parameter turning to find the proper capacity.",
"Therefore, we take a different approach.We focus on two overlooked problems.",
"First, previous research fails to address the problem inherent to the structure of VAEs.",
"The fundamental cause of posterior collapse (apart from teacher forcing) is the existence of a suboptimal local minimum for the KL term.",
"Second, although existing models use a LSTM as the encoder, it is known that this simple model is not sufficient for text generation tasks (Bahdanau et al., 2014; BID14 BID26 .",
"In this work, we propose a new architecture for textual VAEs with two modifications to solve these problems.First, we use a multimodal prior distribution and an unimodal posterior distribution to eliminate the explicit minima of ignoring the encoder (Chapter 3.2).",
"Multimodal prior distributions for VAEs have been proposed recently for image and video tasks BID7 BID3 .",
"Specifically, our model uses a Gaussian Mixture distribution as prior distribution which is trained with the method proposed by BID23 .(a",
") The overall architecture of existing models.(b",
") The overall architecture of our model. In",
"the encoder, hidden states of the self-attention Encoder and BoW are concatenated. The",
"decoder estimates BoW of the input text from the latent variables as a sub-task in addition to generating text. In",
"our model, the prior distribution of the latent variables is a Gaussian mixture model. Second",
", we modify the encoder (Chapter 3.3). We empirically",
"compare a number of existing encoders and adopt a combination of two. The first is the",
"recently proposed method of embedding text into fixed-size variables using the attention mechanism BID12 . Although this method",
"was originally proposed for classification tasks, we show this encoder is also effective at text generation tasks. The second is a a Bag-of-Words",
"encoding of input text to help the encoder. It has been reported that a simple",
"Bag-of-Words encoding is effective at embedding the semantic content of a sentence BID18 . Our experiments show that the modified",
"encoder produces improved results only when other parts of the model are modifed as well to stabilize training. Additionally, our results imply that the",
"self-attention encoder captures grammatical structure and Bag-of-Words captures semantic content.Finally, to help the model acquire meaningful latent variables without weakening the decoder, we add multi-task learning (Chapter 3.4). We find that a simple sub-task of predicting",
"words included in the text significantly improves the quality of output text. It should be noted that this task does not cause",
"posterior collapse as it does not require teacher forcing.With these modifications, our model outperforms baselines on BLEU score, showing that generated texts are well conditioned on information from the encoder (Chapter 4.3). Additionally, we show that each component of the",
"multimodal prior distribution captures grammatical or contextual features and improves interpretability of the global features (Chapter 4.5). BID1 is the first work to apply VAEs to language",
"modeling. They identify the problem of posterior collapse",
"for textual VAEs and propose the usage of word dropout and KL annealing. BID16 models text as Bag-of-Words with VAEs. This",
"is part of the motivation behind the usage of",
"Bag-of-Words for textual VAEs. BID30 hypothesize that posterior collapse can be prevented",
"by controlling the capacity of the decoder and propose a model with a dilated CNN decoder which allows changing the effective filter size. BID21 use a deconvolutional layer without teacher forcing",
"to force the model into using information from the encoder.",
"and increases hyper-parameters.",
"We show",
"(i) multimodal prior distribution,",
"(ii) improvement of the encoder and",
"(iii) multi-task learning can improve the model with a simple LSTM decoder."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.13333332538604736,
0.25641024112701416,
0.24390242993831635,
0.3333333432674408,
0.375,
0.1395348757505417,
0.16949151456356049,
0.05128204822540283,
0.1249999925494194,
0.08888888359069824,
0.1904761791229248,
0.09302324801683426,
0.22727271914482117,
0.11764705181121826,
0.11764705181121826,
0.1904761791229248,
0.23076923191547394,
0.20338982343673706,
0.10810810327529907,
0.1904761791229248,
0.06666666269302368,
0.13333332538604736,
0.1764705777168274,
0.19999998807907104,
0.2222222238779068,
0.1249999925494194,
0.2222222238779068,
0.15789473056793213,
0.1860465109348297,
0.21621620655059814,
0.14999999105930328,
0.13636362552642822,
0.3103448152542114,
0.2380952388048172,
0.09836065024137497,
0.21276594698429108,
0.20000000298023224,
0.2926829159259796,
0.13793103396892548,
0.05882352590560913,
0.3333333432674408,
0.12903225421905518,
0.07999999821186066,
0,
0.2142857164144516,
0.1764705777168274
] | H1eZ6sRcFm | true | [
"We propose a model of variational autoencoders for text modeling without weakening the decoder, which improves the quality of text generation and interpretability of acquired representations."
] |
[
"Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model.",
"Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size.",
"This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy.",
"We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process.",
"We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint.",
"Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.",
"Over the past few years, deep neural networks have achieved state-of-the-art performance on several challenging tasks in the domains of computer vision, speech recognition, and natural language processing.",
"Driven by increasing amounts of data and computational power, deep learning models have become bigger and deeper to better learn from data.",
"While these models are typically deployed in a datacenter back-end, preserving user privacy and reducing user-perceived query times mandate the migration of the intelligence offered by these deep neural networks towards edge computing devices.",
"Deploying large, accurate deep learning models to resource-constrained computing environments such as mobile phones, smart cameras etc. for on-device inference poses a few key challenges.",
"Firstly, state-of-the-art deep learning models routinely have millions of parameters requiring~MBs of storage, whereas on-device memory is limited.",
"Furthermore, it is not uncommon for even a single model inference to invoke~billions of memory accesses and arithmetic operations, all of which consume power and dissipate heat which may drain the limited battery capacity and/or test the device's thermal limits.Confronting these challenges, a growing body of work has emerged that intends to discover methods for compressing neural network models while limiting any potential loss in model quality.",
"Latencysensitive workloads relying on energy-efficient on-device neural network inference are often memory bandwidth-bound, and model compression offers the two-fold benefit of reducing the total number of energy-intensive memory accesses as well as improving the inference time due to an effectively higher memory bandwidth for fetching compressed model parameters.",
"Within the realm of model compression techniques, pruning away (forcing to zero) the less salient connections (parameters) in the neural network has been shown to reduce the number of nonzero parameters in the model with little to no loss in the final model quality.",
"Model pruning enables trading off a small degradation in model quality for a reduction in model size, potentially reaping improvements in inference time and energy-efficiency.",
"The resulting pruned model typically has sparse connection matrices, so efficient inference using these sparse models requires purpose-built hardware capable of loading sparse matrices and/or performing sparse matrix-vector operations BID30 BID23 .",
"Also, representing sparse matrices carries with it an additional storage overhead increasing the model's net memory footprint which must also be taken into consideration.In this work, we perform a closer examination of the effectiveness of model pruning as a means for model compression.",
"From the perspective of on-device neural network inference, given a bound on the model's memory footprint, how can we arrive at the most accurate model?",
"We aim to answer this question by comparing the quality of the models obtained through two distinct methods: (1) training a large model, but pruned to obtain a sparse model with a small number of nonzero parameters (large-sparse); and (2) training a small-dense model with size comparable to the large-sparse model.",
"Both of these methods expose a model accuracy and size tradeoff, but differ remarkably in terms of their implications on the design of the underlying hardware architecture.",
"For this comparative study, we pick models across a diverse set of application domains: InceptionV3 BID26 and MobileNets BID13 for image recognitions tasks, stacked LSTMs for language modeling, and seq2seq models used in Google's Neural Machine Translation BID28 system.",
"In the process of this investigation, we also develop a simple gradual pruning approach that requires minimal tuning and can be seamlessly incorporated within the training process and demonstrate its applicability and performance on an assortment of neural network architectures.",
"The net memory footprint of a sparse model includes the storage for the nonzero parameters and any auxiliary data structures needed for indexing these elements.",
"Pruning models helps reduce the number of nonzero-valued connections in the network; however the overhead in sparse matrix storage inevitably diminishes the achievable compression ratio.",
"The bit-mask sparse matrix representation requires 1 bit per matrix element indicating whether the element is nonzero, and a vector containing all the nonzero matrix elements.",
"This representation incurs a constant overhead regardless of the model sparsity.",
"In the compressed sparse row (column) storage (CSR(C)) adopted in BID23 , each nonzero parameter in the sparse matrix is associated with a count (usually stored as a 4 or 5 bit integer) of the number of zeros preceding it.",
"The overhead in this case is proportional to the NNZ in the model.",
"TAB5 compares these two representations for sparse-MobileNets.",
"The CSR(C) representation can enable higher compression ratio for networks with high sparsity.",
"Note, however, that the bit-mask representation offers marginally lower overhead at smaller sparsity levels.In spite of this overhead, large-sparse models appear to achieve higher accuracy than small-dense models with comparable memory footprint.",
"For instance, MobileNet with width multiplier 1 and sparsity 50% has similar footprint as MobileNet with width multiplier 0.75, but obtains higher accuracy.",
"TAB6 further highlights the trade-off between model size and accuracy for dense and sparse models.",
"The performance gap between large-sparse and small-dense models widens for larger models such as as the PTB language models and NMT (see TAB2 and TAB3 ).",
"It is worth noting that the results presented in this work were obtained by training neural networks using 32-bit floating point representation.",
"For neural networks trained to perform inference using reduced precision (8-bit integer, for instance) arithmetic, the memory overhead of sparse matrix storage represents a bigger fraction of the total memory footprint.",
"Quantization of the parameters to a reduced precision number representation is also an effective method for model compression, and the interplay between model quantization and pruning and their collective impact on model accuracy merits a closer examination.",
"We defer that investigation to a future extension to this work.",
"This work sheds light on the model size and accuracy trade-off encountered in pruned deep neural networks.",
"We demonstrate that large-sparse models outperform comparably-sized small-dense models across a diverse set of neural network architectures.",
"We also present a gradual pruning technique that can be applied with ease across these different architectures.",
"We believe these results will encourage the adoption of model pruning as a tool for compressing neural networks for deployment in resource-constrained environments.",
"At the same time, we hold the opinion that our results will provide further impetus to the hardware architecture community to customize the next generation of deep learning accelerator architectures to efficiently handle sparse matrix storage and computations."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.09836065024137497,
0.09836065024137497,
0.7894736528396606,
0.1111111044883728,
0,
0.052631575614213943,
0.03999999538064003,
0.09302324801683426,
0.11428570747375488,
0.07792207598686218,
0.03448275476694107,
0.03999999538064003,
0,
0.08695651590824127,
0.10344827175140381,
0.04878048226237297,
0.17543859779834747,
0.0952380895614624,
0.03703703358769417,
0.07547169178724289,
0.09756097197532654,
0.051282044500112534,
0,
0,
0.038461532443761826,
0,
0,
0.06451612710952759,
0.19999998807907104,
0.15789473056793213,
0.1249999925494194,
0.051282044500112534,
0.04999999329447746,
0.08695651590824127,
0.04081632196903229,
0.1428571343421936,
0.05714285373687744,
0.29411762952804565,
0.17142856121063232,
0.04999999329447746,
0.039215680211782455
] | S1lN69AT- | true | [
"We demonstrate that large, but pruned models (large-sparse) outperform their smaller, but dense (small-dense) counterparts with identical memory footprint."
] |
[
"Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities.",
"However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining.",
"We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.",
"In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM.",
"Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation.",
"Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency.",
"PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.",
"The Transformer architecture (Vaswani et al., 2017) has enabled large-scale language models (LMs) trained on a huge amount of data (Radford et al., 2019; Dai et al., 2019b; Radford et al., 2018b) to greatly improve the state-of-the-art on natural language processing tasks.",
"These models are used to extract contextualized word embeddings for transfer learning purposes (Devlin et al., 2019) and as natural language generators.",
"The latter can leverage large amounts of unannotated data and a simple log-likelihood training objective.",
"However, once such models are trained, controlling attributes of Table 1 : The PPLM employs a pre-trained language model (LM) without any changes to the model parameters and can generate text with controlled attributes such as topic and sentiment.",
"We demonstrate control with two tiny and easy to construct attribute models: a bag of words (BoW) related to a topic and a linear discriminator trained on top of LM latent representations to control sentiment.",
"The underlined prefix is what the LM is conditioned on to generate a passage of text (e.g. The potato",
"The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato).",
"The controlled attributes are colored and bracketed (e.g. [Science] ), and words in the BoW that are directly optimized for are highlighted brightly (e.g. research).",
"The softer highlights correspond to words related to the attribute, but not directly optimized for during the control process (e.g. health).",
"[-] The potato",
"The The potato chip recipe you asked for!",
"We love making these, and I've been doing so for years.",
"I've always had a hard time keeping a recipe secret.",
"I think it's the way our kids love to eat them -so many little ones.",
"[Science] The potato",
"The To conclude, the most significant and lasting damage from the economic crisis in 2008 was that many governments, including those in the political center, lost power for the first time in modern history.",
"generated text becomes difficult without modifying the model architecture to allow for extra input attributes or fine-tuning with attribute-specific data (Keskar et al., 2019; Ziegler et al., 2019) .",
"Controllable generation entails modeling p(x|a), where a is some desired controllable attribute(s) and x the generated sample.",
"However, generative models only learn p(x).",
"In computer vision, Plug & Play Generative Networks (PPGN) from Nguyen et al. (2017) developed a mechanism for generating images with different attributes by plugging a discriminator (attribute model) p(a|x) together with a base generative model p(x) and sampling from the resulting p(x|a) ∝ p(a|x)p(x), effectively creating a conditional generative model on the fly from any supplied attribute model.",
"In a similar manner, we propose the Plug and Play Language Model (PPLM) for conditional language generation that combines one or more simple attribute models p(a|x)-either in the form of a bagof-words (BoW) or single layer classifiers-with a pre-trained, unconditional language model p(x).",
"We sample from the resulting combined model by following gradients in the latent representation space in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) (Roberts et al., 1996; Roberts & Rosenthal, 1998) sampler deployed in Nguyen et al. (2017) .",
"Optimization is performed ex post facto in the activation space, therefore no re-training or finetuning is needed.",
"Control is fine-grained, with a strength parameter determining how strong the attribute influence should be; a strength of 0 fully recovers the original model p(x).",
"This design allows vast flexibility: users can combine a state-of-the-art generative model, which may be large and difficult to train, with any number of attribute controllers.",
"Attribute models may be easier to train or untrained (in the case of BoW models), and multiple controllers may be combined flexibly during inference.",
"In this paper, we demonstrate the PPLM approach using a GPT-2 345M model (Radford et al., 2019) as the general-purpose LM p(x), but the method applies in any representation space from any transformer-based text generator and allows combination with any attribute model p(a|x).",
"We demonstrate controlled generation with a number of attribute controllers, assembled and combined during generation, each with a different strength, acting as a set of \"control knobs\" that tune generation towards the desired attribute (see examples in Table 1 ).",
"Code for the experiments is available at: https://github.com/uber-research/PPLM.",
"Our key contributions are:",
"• We introduce the Plug and Play LM for controlled language generation, discuss its relation to existing work, and how sampling from a PPLM works (Sections 2 and 3).",
"• We demonstrate controlling of text generation on a range of attributes, including 7 topics each defined using a bag of words, and 1 simple discriminator on sentiments.",
"We quantify effectiveness using both automated evaluation (separately trained perplexity and sentiment models) as well as human evaluation (for attribute relevance and fluency).",
"All evaluations point toward the ability of PPLMs to generate attribute controlled, fluent text (Section 4).",
"• We compare PPLM with strong LM baselines such as CTRL (Keskar et al., 2019) and GPT-2 finetuned for positivty (Ziegler et al., 2019) .",
"Our method, without any LM training, is on par and often outperforms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3).",
"• We show that the PPLM approach can be used to detoxify certain instances where generation of toxic content is likely by following the negative gradient of a model trained to detect toxicity (Section 4.4).",
"We also show how PPLM can be used for structurally constrained story writing (Section 4.5).",
"We present PPLM, a plug and play method for controlled language generation that allows flexible assembling of a large, pre-trained language model and a BoW or a small, easy-to-train discriminator, and achieves fine-grained control of attributes such as topics and sentiment.",
"Without retraining or fine-tuning the language model, the simple mechanism shows great capability of attribute control while retaining fluency.",
"We believe this method could serve as a simple baseline for the largely open-ended language generation tasks where controlling is challenging.",
"There has recently been a substantial discussion around the ethics of capable language models (Radford et al., 2019; Keskar et al., 2019) , both in their potential to recapitulate problematic social biases and for them to be directly abused for societal harm (e.g. to generate disinformation).",
"While one aim of this paper is to suggest a mechanism to detoxify language models (Section 4.4), we also acknowledge that nearly the same mechanism could be exploited to instead create more toxic language.",
"Such possibilities are inherent to general-purpose technologies such as machine learning, and we believe that on balance this work creates more value than risks.",
"Acknowledgements The authors gratefully thank Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications for this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumond, Lysandre Debut, Thomas Wolf, and the Hugging Face team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group at Uber AI for helpful discussion, ideas, and feedback on experiments.Without retraining or fine-tuning the language model, the simple mechanism shows great capability of attribute control while retaining fluency.",
"We believe this method could serve as a simple baseline for the largely open-ended language generation tasks where controlling is challenging.",
"We consider three baselines: CTRL, GPT2-FT-RL, and WD.",
"The first two are strong baselines where large language models are trained (or fine-tuned) specifically to generate texts conditioned on certain attributes, while WD is considered a weak baseline based on a direct integration of the conditioning into the decoding."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13793103396892548,
0.2380952388048172,
0.36734694242477417,
0.0952380895614624,
0.17142856121063232,
0.22857142984867096,
0.21739129722118378,
0.08163265138864517,
0.05405404791235924,
0.20689654350280762,
0.3265306055545807,
0.2857142686843872,
0.1875,
0,
0.1111111044883728,
0.11764705181121826,
0,
0,
0.1599999964237213,
0,
0.06896550953388214,
0,
0.09302324801683426,
0.1463414579629898,
0.19354838132858276,
0,
0.0952380895614624,
0.1538461446762085,
0.0833333283662796,
0.06666666269302368,
0.1111111044883728,
0.14999999105930328,
0.1666666567325592,
0.15094339847564697,
0.2083333283662796,
0.08695651590824127,
0,
0.1463414579629898,
0.2631579041481018,
0.1764705777168274,
0.19999998807907104,
0.1111111044883728,
0.2222222238779068,
0.1702127605676651,
0.06666666269302368,
0.25531914830207825,
0.1875,
0.17142856121063232,
0.1071428507566452,
0.08888888359069824,
0.052631575614213943,
0.07407407462596893,
0.17142856121063232,
0.1818181723356247,
0.07999999821186066
] | H1edEyBKDS | true | [
"We control the topic and sentiment of text generation (almost) without any training. "
] |
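Each row above follows the same layout: a list of source sentences, a parallel list of 0/1 labels, a parallel list of per-sentence scores, a paper id, a boolean flag, and a one-sentence target summary. The sketch below shows one way such a row could be handled in Python. It is a minimal illustration only: the dictionary keys and helper names are invented here, the label/score values are illustrative, and the reading that a 1 label marks the extractive pick is an assumption based on the rows shown in this dump, not a documented guarantee.

```python
# Hypothetical, hand-typed stand-in for one row of this dump.
# Only the parallel-list layout is taken from the rows above; key names are invented.
row = {
    "sentences": [
        "We present PPLM, a plug and play method for controlled language generation ...",
        "Without retraining or fine-tuning the language model, the simple mechanism "
        "shows great capability of attribute control while retaining fluency.",
    ],
    "labels": [0, 1],        # illustrative: one 0/1 flag per sentence
    "scores": [0.19, 0.29],  # illustrative: one float per sentence
    "id": "H1edEyBKDS",
    "flag": True,
    "target": "We control the topic and sentiment of text generation (almost) without any training.",
}

def oracle_sentences(row):
    """Return the sentences whose label is 1 (the apparent extractive pick)."""
    return [s for s, label in zip(row["sentences"], row["labels"]) if label == 1]

def best_scoring_sentence(row):
    """Return the sentence with the highest per-sentence score."""
    i = max(range(len(row["scores"])), key=row["scores"].__getitem__)
    return row["sentences"][i]

if __name__ == "__main__":
    print(oracle_sentences(row))
    print(best_scoring_sentence(row))
```

In the rows visible here, the sentence flagged with 1 also tends to carry the highest score, which is why the two helpers above usually agree, but that correspondence is an observation about this dump rather than a stated property.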
[
"Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering.",
"Such an organization significantly constricts the types of shared structure that can be learned.",
"The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers.",
"The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks.",
"Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains.",
"These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.",
"In multitask learning (MTL) BID4 , auxiliary data sets are harnessed to improve overall performance by exploiting regularities present across tasks.",
"As deep learning has yielded state-ofthe-art systems across a range of domains, there has been increased focus on developing deep MTL techniques.",
"Such techniques have been applied across settings such as vision BID2 BID19 BID35 BID37 BID49 BID52 , natural language BID6 BID8 BID12 BID29 BID32 , speech BID16 BID5 BID42 , and reinforcement learning BID7 BID9 BID18 BID40 .",
"Although they improve performance over single-task learning in these settings, these approaches have generally been constrained to joint training of relatively few and/or closely-related tasks.On the other hand, from a perspective of Kolmogorov complexity, \"transfer should always be useful\"; any pair of distributions underlying a pair of tasks must have something in common BID33 BID34 .",
"In principle, even tasks that are \"superficially unrelated\" such as those in vision and NLP can benefit from sharing (even without an adaptor task, such as image captioning).",
"In other words, for a sufficiently expressive class of models, the inductive bias of requiring a model to fit multiple tasks simultaneously should encourage learning to converge to more realistic representations.",
"The expressivity and success of deep models suggest they are ideal candidates for improvement via MTL.",
"So, why have existing approaches to deep MTL been so restricted in scope?MTL",
"is based on the assumption that learned transformations can be shared across tasks. This",
"paper identifies an additional implicit assumption underlying existing approaches to deep MTL: this sharing takes place through parallel ordering of layers. That",
"is, sharing between tasks occurs only at aligned levels (layers) in the feature hierarchy implied by the model architecture. This",
"constraint limits the kind of sharing that can occur between tasks. It requires",
"subsequences of task feature hierarchies to match, which may be difficult to establish as tasks become plentiful and diverse. This paper",
"investigates whether parallel ordering of layers is necessary for deep MTL. As an alternative",
", it introduces methods that make deep MTL more flexible. First, existing",
"approaches are reviewed in the context of their reliance on parallel ordering. Then, as a foil",
"to parallel ordering, permuted ordering is introduced, in which shared layers are applied in different orders for different tasks. The increased ability",
"of permuted ordering to support integration of information across tasks is analyzed, and the results are used to develop a soft ordering approach to deep MTL. In this (a) Classical",
"approaches",
"add a task-specific decoder to the output of the core single-task model for each task; (b) Columnbased approaches",
"include a network column for each task, and define a mechanism for sharing between columns; (c) Supervision at custom",
"depths adds output decoders at depths based on a task hierarchy; (d) Universal representations",
"adapts each layer with a small number of task-specific scaling parameters. Underlying each of these approaches",
"is the assumption of parallel ordering of shared layers (Section 2.2): each one requires aligned sequences of feature extractors across tasks.approach, a joint model learns how to apply shared layers in different ways at different depths for different tasks as it simultaneously learns the parameters of the layers themselves. In a suite of experiments, soft ordering",
"is shown to improve performance over single-task learning as well as over fixed order deep MTL methods.Importantly, soft ordering is not simply a technical improvement, but a new way of thinking about deep MTL. Learning a different soft ordering of layers",
"for each task amounts to discovering a set of generalizable modules that are assembled in different ways for different tasks. This perspective points to future approaches",
"that train a collection of layers on a set of training tasks, which can then be assembled in novel ways for future unseen tasks. Some of the most striking structural regularities",
"observed in the natural, technological and sociological worlds are those that are repeatedly observed across settings and scales; they are ubiquitous and universal. By forcing shared transformations to occur at matching",
"depths in hierarchical feature extraction, deep MTL falls short of capturing this sort of functional regularity. Soft ordering is thus a step towards enabling deep MTL",
"to realize the diverse array of structural regularities found across complex tasks drawn from the real world.",
"In the interest of clarity, the soft ordering approach in this paper was developed as a relatively small step away from the parallel ordering assumption.",
"To develop more practical and specialized methods, inspiration can be taken from recurrent architectures, the approach can be extended to layers of more general structure, and applied to training and understanding general functional building blocks.Connections to recurrent architectures.",
"Eq.",
"7 is defined recursively with respect to the learned layers shared across tasks.",
"Thus, the soft-ordering architecture can be viewed as a new type of recurrent architecture designed specifically for MTL.",
"From this perspective, Figure 3 shows an unrolling of a soft layer module: different scaling parameters are applied at different depths when unrolled for different tasks.",
"Since the type of recurrence induced by soft ordering does not require task input or output to be sequential, methods that use recurrence in such a setting are of particular interest BID26 BID27 BID36 BID44 BID50 .",
"Recurrent methods can also be used to reduce the size of S below O(T D 2 ), e.g., via recurrent hypernetworks BID11 .",
"Finally, Section 4 demonstrated soft ordering where shared learned layers were fully-connected or convolutional; it is also straightforward to extend soft ordering to shared layers with internal recurrence, such as LSTMs BID15 .",
"In this setting, soft ordering can be viewed as inducing a higher-level recurrence.Generalizing the structure of shared layers.",
"For clarity, in this paper all core layers in a given setup had the same shape.",
"Of course, it would be useful to have a generalization of soft ordering that could subsume any modern deep architecture with many layers of varying structure.",
"As given by Eq. 7, soft ordering requires the same shape inputs to the element-wise sum at each depth.",
"Reshapes and/or resampling can be added as adapters between tensors of different shape; alternatively, a function other than a sum could be used.",
"For example, instead of learning a weighting across layers at each depth, a probability of applying each module could be learned in a manner similar to adaptive dropout BID1 BID25 or a sparsely-gated mixture of experts BID43 .",
"Furthermore, the idea of a soft ordering of layers can be extended to soft ordering over modules with more general structure, which may more succinctly capture recurring modularity.Training generalizable building blocks.",
"Because they are used in different ways at different locations for different tasks, the shared trained layers in permuted and soft ordering have learned more general functionality than layers trained in a fixed location or for a single task.",
"A natural hypothesis is that they are then more likely to generalize to future unseen tasks, perhaps even without further training.",
"This ability would be especially useful in the small data regime, where the number of trainable parameters should be limited.",
"For example, given a collection of these layers trained on a previous set of tasks, a model for a new task could learn how to apply these building blocks, e.g., by learning a soft order, while keeping their internal parameters fixed.",
"Learning an efficient set of such generalizable layers would then be akin to learning a set of functional primitives.",
"Such functional modularity and repetition is evident in the natural, technological and sociological worlds, so such a set of functional primitives may align well with complex real-world models.",
"This perspective is related to recent work in reusing modules in the parallel ordering setting BID9 .",
"The different ways in which different tasks learn to use the same set of modules can also help shed light on how tasks are related, especially those that seem superficially disparate (e.g., by extending the analysis performed for FIG3 ), thus assisting in the discovery of real-world regularities.",
"This paper has identified parallel ordering of shared layers as a common assumption underlying existing deep MTL approaches.",
"This assumption restricts the kinds of shared structure that can be learned between tasks.",
"Experiments demonstrate how direct approaches to removing this assumption can ease the integration of information across plentiful and diverse tasks.",
"Soft ordering is introduced as a method for learning how to apply layers in different ways at different depths for different tasks, while simultaneously learning the layers themselves.",
"Soft ordering is shown to outperform parallel ordering methods as well as single-task learning across a suite of domains.",
"These results show that deep MTL can be improved while generating a compact set of multipurpose functional primitives, thus aligning more closely with our understanding of complex real-world processes.All experiments were run with the Keras deep learning framework BID5 , using the Tensorflow backend BID0 .",
"All experiments used the Adam optimizer with default parameters BID20 unless otherwise specified.In each iteration of multitask training, a random batch for each task is processed, and the results are combined across tasks into a single update.",
"Compared to alternating batches between tasks BID32 , processing all tasks simultaneously simplified the training procedure, and led to faster and lower final convergence.",
"When encoders are shared, the inputs of the samples in each batch are the same across tasks.",
"Cross-entropy loss was used for all classification tasks.",
"The overall validation loss is the sum over all per task validation losses.In each experiment, single task, parallel ordering (Eq. 2), permuted ordering (Eq. 3), and soft ordering (Eq. 7) trained an equivalent set of core layers.",
"In permuted ordering, the order of layers was randomly generated for each task each trial.",
"Several trials were run for each setup to produce confidence bounds."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29629629850387573,
0.23076923191547394,
0.19999998807907104,
0.23255813121795654,
0.07999999821186066,
0.2222222238779068,
0.12121211737394333,
0.1875,
0.04255318641662598,
0.10169491171836853,
0,
0.20512820780277252,
0.1428571343421936,
0.07999999821186066,
0.1538461446762085,
0.11764705181121826,
0.06451612710952759,
0.23999999463558197,
0.1249999925494194,
0.1538461446762085,
0.1666666567325592,
0.1428571343421936,
0.0624999962747097,
0.15789473056793213,
0.13793103396892548,
0,
0,
0.07692307233810425,
0.1090909093618393,
0.13636362552642822,
0.05714285373687744,
0.09999999403953552,
0.10526315122842789,
0.11428570747375488,
0.1428571343421936,
0.11764705181121826,
0.1428571343421936,
0.1599999964237213,
0.13793103396892548,
0.0555555522441864,
0.08695651590824127,
0.1111111044883728,
0.05128204822540283,
0.19354838132858276,
0.07407406717538834,
0.10810810327529907,
0.06666666269302368,
0.060606054961681366,
0.09302325546741486,
0.14999999105930328,
0.1395348757505417,
0.0624999962747097,
0.13333332538604736,
0.08163265138864517,
0.13793103396892548,
0.10526315122842789,
0.07407406717538834,
0.0714285671710968,
0.19999998807907104,
0.23076923191547394,
0.1249999925494194,
0.11428570747375488,
0.13793103396892548,
0.18518517911434174,
0.12765957415103912,
0.060606054961681366,
0.1538461446762085,
0,
0.08888888359069824,
0.1538461446762085,
0
] | BkXmYfbAZ | true | [
"Relaxing the constraint of shared hierarchies enables more effective deep multitask learning."
] |
[
"We propose a generic framework to calibrate accuracy and confidence (score) of a prediction through stochastic inferences in deep neural networks.",
"We first analyze relation between variation of multiple model parameters for a single example inference and variance of the corresponding prediction scores by Bayesian modeling of stochastic regularization.",
"Our empirical observation shows that accuracy and score of a prediction are highly correlated with variance of multiple stochastic inferences given by stochastic depth or dropout.",
"Motivated by these facts, we design a novel variance-weighted confidence-integrated loss function that is composed of two cross-entropy loss terms with respect to ground-truth and uniform distribution, which are balanced by variance of stochastic prediction scores.",
"The proposed loss function enables us to learn deep neural networks that predict confidence calibrated scores using a single inference.",
"Our algorithm presents outstanding confidence calibration performance and improves classification accuracy with two popular stochastic regularization techniques---stochastic depth and dropout---in multiple models and datasets; it alleviates overconfidence issue in deep neural networks significantly by training networks to achieve prediction accuracy proportional to confidence of prediction.",
"Deep neural networks have achieved remarkable performance in various tasks, but have critical limitations in reliability of their predictions.",
"One example is that inference results are often overly confident even for unseen or tricky examples; the maximum scores of individual predictions are very high even for out-of-distribution examples and consequently distort interpretation about the predictions.",
"Since many practical applications including autonomous driving, medical diagnosis, and machine inspection require accurate uncertainty estimation as well as high prediction accuracy for each inference, such an overconfidence issue makes deep neural networks inappropriate to be deployed for real-world problems in spite of their impressive accuracy.Regularization is a common technique in training deep neural networks to avoid overfitting problems and improve generalization accuracy BID18 ; BID6 ; BID7 .",
"However, their objectives are not directly related to generating score distributions aligned with uncertainty of individual predictions.",
"In other words, existing deep neural networks are inherently poor at calibrating prediction accuracy and confidence.Our goal is to learn deep neural networks that are able to estimate accuracy and uncertainty of each prediction at the same time.",
"Hence, we propose a generic framework to calibrate prediction score (confidence) with accuracy in deep neural networks.",
"Our algorithm starts with an observation that variance of prediction scores measured from multiple stochastic inferences is highly correlated with accuracy and confidence of the prediction based on the average score, where we employ stochastic regularization techniques such as stochastic depth or dropout to obtain multiple stochastic inference results.",
"We also interpret stochastic regularization as a Bayesian model, which shows relation between stochastic modeling and stochastic inferences of deep neural networks.",
"By exploiting these properties, we design a loss function to enable deep neural network to predict confidence-calibrated scores based only on a single prediction, without stochastic inferences.",
"Our contribution is summarized below:• We provide a generic framework to estimate uncertainty of a prediction based on stochastic inferences in deep neural networks, which is motivated by empirical observation and theoretical analysis.•",
"We design a variance-weighted confidence-integrated loss function in a principled way without hyper-parameters, which enables deep neural networks to produce confidencecalibrated predictions even without stochastic inferences.•",
"The proposed framework presents outstanding performance to reduce overconfidence issue and estimate accurate uncertainty in various architectures and datasets.The rest of the paper is organized as follows. We",
"first discuss prior research related to our algorithm, and describe theoretical background for Bayesian interpretation of our approach in Section 2 and 3, respectively. Section",
"4 presents our confidence calibration algorithm through stochastic inferences, and Section 5 illustrates experimental results.",
"We presented a generic framework for uncertainty estimation of a prediction in deep neural networks by calibrating accuracy and score based on stochastic inferences.",
"Based on Bayesian interpretation of stochastic regularization and our empirical observation results, we claim that variation of multiple stochastic inferences for a single example is a crucial factor to estimate uncertainty of the average prediction.",
"Motivated by this fact, we design the variance-weighted confidence-integrated loss to learn confidence-calibrated networks and enable uncertainty to be estimated by a single prediction.",
"The proposed algorithm is also useful to understand existing confidence calibration methods in a unified way, and we compared our algorithm with other variations within our framework to analyze their characteristics."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4390243887901306,
0.1702127605676651,
0.2222222238779068,
0.29629629850387573,
0.3414634168148041,
0.1355932205915451,
0.052631575614213943,
0.038461532443761826,
0.10256409645080566,
0.10526315122842789,
0.19607841968536377,
0.2631579041481018,
0.12903225421905518,
0.24390242993831635,
0.30434781312942505,
0.30188679695129395,
0.3478260934352875,
0.1666666567325592,
0.04651162400841713,
0.1111111044883728,
0.3636363446712494,
0.23076923191547394,
0.41860464215278625,
0.12244897335767746
] | HJz1vo0cYX | true | [
"We propose a framework to learn confidence-calibrated networks by designing a novel loss function that incorporates predictive uncertainty estimated through stochastic inferences."
] |
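The float lists in these rows look like per-sentence overlap scores against the one-sentence target. As a rough illustration only, the sketch below computes a unigram-overlap (ROUGE-1-style) F1 between a candidate sentence and a reference; the exact metric and tokenizer behind the stored numbers are not stated in this dump, so this is an approximation of the idea rather than a reproduction of those values.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate sentence and a reference sentence."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example using the first source sentence and the target of the row above;
# the result will only loosely track the stored score for that sentence.
print(rouge1_f1(
    "We propose a generic framework to calibrate accuracy and confidence (score) "
    "of a prediction through stochastic inferences in deep neural networks.",
    "We propose a framework to learn confidence-calibrated networks by designing a novel "
    "loss function that incorporates predictive uncertainty estimated through stochastic inferences.",
))
```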
[
"Real-life control tasks involve matters of various substances---rigid or soft bodies, liquid, gas---each with distinct physical behaviors.",
"This poses challenges to traditional rigid-body physics engines.",
"Particle-based simulators have been developed to model the dynamics of these complex scenes; however, relying on approximation techniques, their simulation often deviates from real-world physics, especially in the long term.",
"In this paper, we propose to learn a particle-based simulator for complex control tasks.",
"Combining learning with particle-based systems brings in two major benefits: first, the learned simulator, just like other particle-based systems, acts widely on objects of different materials; second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within.",
"This enables the model to quickly adapt to new environments of unknown dynamics within a few observations.",
"We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world.",
"Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations.",
"Objects have distinct dynamics.",
"Under the same push, a rigid box will slide, modeling clay will deform, and a cup full of water will fall with water spilling out.",
"The diverse behavior of different objects poses challenges to traditional rigid-body simulators used in robotics BID31 BID30 .",
"Particle-based simulators aim to model the dynamics of these complex scenes BID18 ; however, relying on approximation techniques for the sake of perceptual realism, their simulation often deviates from real world physics, especially in the long term.",
"Developing generalizable and accurate forward dynamics models is of critical importance for robot manipulation of distinct real-life objects.We propose to learn a differentiable, particle-based simulator for complex control tasks, drawing inspiration from recent development in differentiable physical engines BID0 BID3 .",
"In robotics, the use of differentiable simulators, together with continuous and symbolic optimization algorithms, has enabled planning for increasingly complex whole body motions with multi-contact and multi-object interactions BID32 ).",
"Yet these approaches have only tackled local interactions of rigid bodies.",
"We develop dynamic particle interaction networks (DPINets) for learning particle dynamics, focusing on capturing the dynamic, hierarchical, and long-range interactions of particles FIG0 -c).",
"DPI-Nets can then be combined with classic perception and gradient-based control algorithms for robot manipulation of deformable objects FIG0 ).Learning",
"a particle-based simulator brings in two major benefits. First, the",
"learned simulator, just like other particle-based systems, acts widely on objects of different states. DPI-Nets have",
"successfully captured the complex behaviors of deformable objects, fluids, and rigid-bodies. With learned",
"DPINets, our robots have achieved success in manipulation tasks that involve deformable objects of complex physical properties, such as molding plasticine to a target shape.Our project page: http://dpi.csail.mit.edu Perception and control with the learned model. Our system first",
"reconstructs the particle-based shape from visual observation. It then uses gradient-based",
"trajectory optimization to search for the actions that produce the most desired output.Second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within. This enables the model to quickly",
"adapt to new environments of unknown dynamics within a few observations. Experiments suggest that DPI-Nets",
"quickly learn to adapt to characterize a novel object of unknown physical parameters by doing online system identification. The adapted model also helps the",
"robot to successfully manipulate object in the real world.DPI-Nets combine three key features for effective particle-based simulation and control: multi-step spatial propagation, hierarchical particle structure, and dynamic interaction graphs. In particular, it employs dynamic",
"interaction graphs, built on the fly throughout manipulation, to capture the meaningful interactions among particles of deformable objects and fluids. The use of dynamic graphs allows",
"neural models to focus on learning meaningful interactions among particles, and is crucial for obtaining good simulation accuracy and high success rates in manipulation. As objects deform when robots interact",
"with them, a fixed interaction graph over particles is insufficient for robot manipulating non-rigid objects.Experiments demonstrate that DPI-Nets significantly outperform interaction networks BID0 , HRN BID19 , and a few other baselines. More importantly, unlike previous paper",
"that focused purely on forward simulation, we have applied our model to downstream control tasks. Our DPI-Nets enable complex manipulation",
"tasks for deformable objects and fluids, and adapts to scenarios with unknown physical parameters that need to be identified online. We have also performed real-world experiments",
"to demonstrate our model's generalization ability.",
"We have demonstrated that a learned particle dynamics model can approximate the interaction of diverse objects, and can help to solve complex manipulation tasks of deformable objects.",
"Our system requires standard open-source robotics and deep learning toolkits, and can be potentially deployed in household and manufacturing environment.",
"Robot learning of dynamic scenes with particle-based representations shows profound potentials due to the generalizability and expressiveness of the representation.",
"Our study helps lay the foundation for it.A CONTROL ALGORITHM Update A by descending with the gradients ∇ A L state (Ĝ t , G t ) Forward simulation using the current graphĜ t+1 ← Φ(G t ) Make a buffer for storing the simulation results G ←Ḡ ∪Ĝ t+1 for i = t + 1, ..., T − 1 do Forward simulation:Ĝ j+1 ← Φ(Ĝ j ); G ← G ∪Ĝ j+1 end for Updateû t:T by descending with the gradients ∇û t: DISPLAYFORM0"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1764705777168274,
0,
0.043478257954120636,
0.12903225421905518,
0.1071428507566452,
0.060606054961681366,
0.1904761791229248,
0.1875,
0.0952380895614624,
0.15789473056793213,
0,
0.07843136787414551,
0.1428571343421936,
0.13333332538604736,
0.0714285671710968,
0.25,
0.31578946113586426,
0,
0,
0.19999998807907104,
0.13793103396892548,
0,
0.0833333283662796,
0.0624999962747097,
0,
0.23999999463558197,
0.2926829159259796,
0.08695651590824127,
0.1538461446762085,
0.05405404791235924,
0.19512194395065308,
0,
0.2857142686843872,
0.05714285373687744,
0.17142856121063232,
0.053333330899477005
] | rJgbSn09Ym | true | [
"Learning particle dynamics with dynamic interaction graphs for simulating and control rigid bodies, deformable objects, and fluids. "
] |
[
"Generative Adversarial Networks (GANs), when trained on large datasets with diverse modes, are known to produce conflated images which do not distinctly belong to any of the modes.",
"We hypothesize that this problem occurs due to the interaction between two facts: (1) For datasets with large variety, it is likely that the modes lie on separate manifolds.",
"(2) The generator (G) is formulated as a continuous function, and the input noise is derived from a connected set, due to which G's output is a connected set.",
"If G covers all modes, then there must be some portion of G's output which connects them.",
"This corresponds to undesirable, conflated images.",
"We develop theoretical arguments to support these intuitions.",
"We propose a novel method to break the second assumption via learnable discontinuities in the latent noise space.",
"Equivalently, it can be viewed as training several generators, thus creating discontinuities in the G function.",
"We also augment the GAN formulation with a classifier C that predicts which noise partition/generator produced the output images, encouraging diversity between each partition/generator.",
"We experiment on MNIST, celebA, STL-10, and a difficult dataset with clearly distinct modes, and show that the noise partitions correspond to different modes of the data distribution, and produce images of superior quality.",
"Generative Adversarial Networks BID8 are powerful generative models that have enjoyed significant attention from the research community in the past few years.",
"Despite several successes, the original formulation for GANs is widely acknowledged to be notoriously difficult to train due to instability issues.",
"In particular, GANs face the mode collapse problem, where the generator resorts to generating a handful of samples which are assigned high probability by the discriminator.",
"Several methods have been introduced to fix the mode collapse problem.",
"BID3 , , BID9 , BID15 , BID21 Despite improvements, state-of-art GANs still fail to generate meaningful samples on diverse and complex datasets such as ImageNet BID5 .",
"GANs trained on such datasets produce conflated images which do not distinctly belong to any of the modes present in the dataset.We hypothesize that this problem occurs due to the continuous nature of the generator function, along with the connectedness of the latent noise space, due to which the output set of the generator is also connected.",
"This poses a problem when dealing with complex real life datasets with varied modes.",
"Strong empirical and theoretical evidence suggests that real life images lie on lowdimensional manifolds BID17 .",
"It is highly probable that distinct modes (say bedroom images and human face images) lie on disjoint manifolds.",
"If we assume that the generator does not suffer from the mode dropping problem, it must cover all these manifolds in its output.",
"However, the output set being connected, must contain parts which do not belong to any of the manifolds, but simply join them.We refer to such parts of the output as tunnels, since they connect otherwise disjoint manifolds.",
"Tunnels do not resemble any of the images in the dataset, and are not similar to any of the modes.",
"They correspond to the conflated images produced by the generator, and are undesirable.",
"By this reasoning, we suggest that GANs with continuous generators and connected latent noise sets must suffer either from a certain degree of mode dropping or from producing conflated, garbled outputs when trained on complex and varied datasets like ImageNet.We develop methods that allow GANs to cover disjoint manifolds without the use of tunnels, while not compromising on mode coverage.",
"Our approach is to create learnable discontinuities in the latent noise space.",
"This is done by learning N different linear mappings (partitions) in the input layer of the generator.",
"A noise vector (sampled from the standard normal distribution), gets mapped to N different vectors by the input layer, and the rest of the processing remains the same as in standard generators.",
"The output set of each mapping is a connected set, but the union of the N output sets could potentially be disconnected.",
"Thus, we break the connectedness assumption leading to the existence of tunnels.",
"To facilitate learning distinct modes by each partition, we introduce a classifier that predicts which partition created a given input.",
"We modify the loss functions to adjust for this change.We experiment on standard datasets: MNIST (LeCun et al., 2010) , celebA BID14 , STL-10 (a subset of ImageNet) BID4 , and a tough artificial dataset with very distinct modes -an equal mixture of LSUN BID22 bedrooms and celebA, to verify the efficacy of our method.",
"We compare our results with one of the best performing GAN variant BID9 , and show an improvement in quality.The major contributions of the paper are summarized below:1.",
"We identify a key problem with training GANs on large & diverse datasets, and provide intuition to explain its cause",
"2. We develop theoretical analyses to support and introduce rigor in the intuitions provided",
"3. Motivated by these analyses, we introduce a novel GAN setup to alleviate the problem",
"4. We experiment on a variety of standard datasets and report improvements over state-of-art formulations 2 RELATED WORK BID8 formulated GAN as a minimax game between two neural networks: generator G θ and discriminator D φ .",
"G θ takes a random noise vector z as input and generates sample G θ (z), while D φ identifies whether input sample is real or generated by the generator G θ .",
"Both G θ and D φ play a two-player minimax game with value function V (G, D): DISPLAYFORM0 where P r (x) is the real data distribution, and P(z) is arbitrary noise distribution (typically uniform or normal distribution).",
"In practice, training GANs using above formulation is highly unstable and requires careful balance of generator and discriminator updates.",
"BID19 proposed a class of CNNs called DCGANs (Deep Convolutional GANs) with certain architectural specifications, and demonstrated better image quality than non-convolutional vanilla GAN architecture.",
"BID6 used Laplacian pyramid framework for the generator, where a separate generative convnet model is trained using GAN approach at each level of pyramid, to generate images in coarse-to-fine fashion.Despite better architectures, GANs suffered from problems like unstable training, vanishing gradients of generator, mode collapse.",
"BID20 proposed several heuristics such as feature matching, minibatch discrimination, historical averaging, label smoothing, primarily to stabilize GAN training.",
"BID3 observed that GAN training can push probability mass in wrong direction, hence are prone to missing modes of data.",
"They proposed regularization techniques to stabilize GAN training and alleviate mode missing problem by fair distribution of probability mass across modes of the real data distribution.",
"BID1 provided theoretical analysis of training dynamics of GANs, and problems including instability and saturation.",
"They revealed fundamental problems with original GAN formulation and provided directions towards solving them.Several papers proposed alternative objective function of generator and discriminator.",
", BID9 proposed new loss function which approximately minimizes Wasserstein distance between real and generated data distribution instead of Jensen Shannon Divergence.",
"They claim their formulation does not require careful balance between generator and discriminator updates, thus lead to stable training without saturating the gradients.",
"They observed no evidence of mode collapse in their experiments.",
"BID15 used squared-loss instead of log-loss in original formulation, which provides generator with better non-vanishing gradients.",
"BID23 view discriminator as an energy function making it possible to use additional loss functions other than logistic output binary classifier, which was found to stabilize GAN training.",
"BID21 propose to train discriminator based on linear separability between hidden representation of real and generated samples and train generator based on decision hyperplanes between hidden representations computed using Linear Discriminant Analysis.For labelled datasets, BID16 , BID18 employed label conditioning in both generator and discriminator to generate discriminable and diverse samples across classes.",
"While this helps produce better samples for complex datasets, it requires the presence of labelled data.",
"In this paper we propose methods to improve performance of GANs on complex datasets without making use of labels.",
"We highlighted a major problem in training GANs on complex image datasets and introduced theoretical analysis for the problem of generation of unrealistic, conflated images in such cases.",
"We proposed the addition of discontinuity in latent noise space of the generator for covering disjoint and diverse modes of the data distribution, and augmented the loss functions to encourage diversity.",
"We showed improvements over existing models without much hyperparameter tuning.In future, we hope to perform an extensive exploration of the search space to obtain a set of hyperparameters along with better methods to introduce discontinuities in the generator that perform well on a variety of datasets, while significantly improving image quality."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.2222222238779068,
0.2666666507720947,
0.1904761791229248,
0.05714285373687744,
0.0833333283662796,
0.1538461446762085,
0.2857142686843872,
0.11764705181121826,
0.14999999105930328,
0.2916666567325592,
0.051282044500112534,
0.1621621549129486,
0.2380952388048172,
0.20689654350280762,
0.2857142686843872,
0.2295081913471222,
0.19354838132858276,
0.12121211737394333,
0.1111111044883728,
0.09999999403953552,
0.1599999964237213,
0.24242423474788666,
0.19999998807907104,
0.277777761220932,
0.13333332538604736,
0.11764705181121826,
0.17777776718139648,
0.1621621549129486,
0.20689654350280762,
0.10810810327529907,
0.21212120354175568,
0.17777776718139648,
0.3684210479259491,
0.3125,
0.24242423474788666,
0.23076923191547394,
0.13636362552642822,
0.1111111044883728,
0.1666666567325592,
0.1395348757505417,
0.16129031777381897,
0.05405404791235924,
0.10526315122842789,
0.1904761791229248,
0.12903225421905518,
0.09756097197532654,
0.09999999403953552,
0.1463414579629898,
0.0714285671710968,
0.05882352590560913,
0.08888888359069824,
0.1666666567325592,
0.23529411852359772,
0.3888888955116272,
0.41860464215278625,
0.23255813121795654,
0.2222222238779068
] | HyDMX0l0Z | true | [
"We introduce theory to explain the failure of GANs on complex datasets and propose a solution to fix it."
] |
[
"To leverage crowd-sourced data to train multi-speaker text-to-speech (TTS) models that can synthesize clean speech for all speakers, it is essential to learn disentangled representations which can independently control the speaker identity and background noise in generated signals.",
"However, learning such representations can be challenging, due to the lack of labels describing the recording conditions of each training example, and the fact that speakers and recording conditions are often correlated, e.g. since users often make many recordings using the same equipment.",
"This paper proposes three components to address this problem by: (1) formulating a conditional generative model with factorized latent variables, (2) using data augmentation to add noise that is not correlated with speaker identity and whose label is known during training, and (3) using adversarial factorization to improve disentanglement.",
"Experimental results demonstrate that the proposed method can disentangle speaker and noise attributes even if they are correlated in the training data, and can be used to consistently synthesize clean speech for all speakers.",
"Ablation studies verify the importance of each proposed component.",
"Recent development of neural end-to-end TTS models BID26 BID1 enables control of both labelled and unlabelled speech attributes by conditioning synthesis on both text and learned attribute representations BID27 BID21 BID10 BID0 BID5 BID9 .",
"This opens the door to leveraging crowd-sourced speech recorded under various acoustic conditions BID18 to train a high-quality multi-speaker TTS model that is capable of consistently producing clean speech.",
"To achieve this, it is essential to learn disentangled representations that control speaker and acoustic conditions independently.",
"However, this can be challenging for two reasons.",
"First, the underlying acoustic conditions of an utterance, such as the type and level of background noise and reverberation, are difficult to annotate, and therefore such labels are often unavailable.",
"This hinders the use of direct conditioning on the acoustic condition labels in a way similar to conditioning on one-hot speaker labels BID1 .",
"Second, speaker identity can have strong correlations with recording conditions, since a speaker might make most of their recordings in the same location using the same device.",
"This makes it difficult to learn a disentangled representation by assuming statistical independence BID6 .We",
"address this scenario by introducing three components: a conditional generative model with factorized latent variables to control different attributes, data augmentation by adding background noise to training utterances in order to counteract the inherent speaker-noise correlation and to create ground truth noisy acoustic condition labels, and adversarial training based on the generated labels to encourage disentanglement between latent variables. We",
"utilize the VCTK speech synthesis dataset BID23 , and background noise signals from the CHiME-4 challenge BID24 to synthesize a dataset containing correlated speaker and background noise conditions for controlled experiments. We",
"extensively evaluate disentanglement performance on the learned latent representations as well as the synthesized samples. Experimental",
"results identify the contribution of each component, and demonstrate the ability of the proposed model to disentangle noise from speakers and consistently synthesize clean speech for all speakers, despite the strong correlation in the training data.",
"We build a neural network TTS model which incorporates conditional generative modeling, data augmentation, and adversarial training to learn disentangled representations of correlated and partially unlabeled attributes, which can be used to independently control different aspects of the synthesized speech.",
"Extensive studies on a synthetic dataset verify the effectiveness of each element of the proposed solution, and demonstrate the robustness to the choice of hyperparameters.The proposed methods for disentangling correlated attributes is general, and can potentially be applied to other pairs of correlated factors, such as reverberation and speaker, or to other modalities, such as controllable text-to-image generation.",
"In addition, for future work, we would also like to investigate the capability of the proposed method to disentangle pairs of attributes which are both unsupervised.6",
"Acknowledgement"
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17543859779834747,
0.17543859779834747,
0.15625,
0.23076923191547394,
0.13333332538604736,
0.19230768084526062,
0.0416666604578495,
0.15789473056793213,
0.06896550953388214,
0.13333332538604736,
0.09999999403953552,
0.08888888359069824,
0,
0.14084506034851074,
0.2083333283662796,
0,
0.19607841968536377,
0.21052631735801697,
0.1846153736114502,
0.13333332538604736
] | Bkg9ZeBB37 | true | [
"Data augmentation and adversarial training are very effective for disentangling correlated speaker and noise, enabling independent control of each attribute for text-to-speech synthesis."
] |
[
"LSTM-based language models exhibit compositionality in their representations, but how this behavior emerges over the course of training has not been explored.",
"Analyzing synthetic data experiments with contextual decomposition, we find that LSTMs learn long-range dependencies compositionally by building them from shorter constituents during training.",
"Consider the process of backpropagation through time for a language model.",
"As an example, the language model should learn that an occurrence of \"either\" increases the later likelihood of \"or\".",
"To do so, it must backpropagate information from the occurrence of \"or\" through some intervening constituent, which we will refer to as a conduit because the association of either/or is carried through it to affect the representation of \"either\".",
"Perhaps it encounters a training example that uses a conduit that is predictable by being structured in familiar ways, here italicized: \"Either Socrates is mortal or not all men are mortal.\"",
"However, what if the conduit is unpredictable and the structure cannot be interpreted by the model, for example, if the conduit includes unknown tokens, as in: \"Either slithy toves gyre or mome raths outgrabe\"?",
"Which conduit will carry the gradient from \"or\" to \"either\" easily?Formally",
", as the gradient of the error e t at timestep t is backpropagated k timesteps through the hidden state h: DISPLAYFORM0 The backpropagated message is multiplied repeatedly by the gradients associated with each item in the conduit. If the",
"recurrence derivatives ∂h i+1 ∂h i are large at some parameter, the correspondingly larger backpropagated gradient ∂et ∂h t−k will accelerate descent in that direction.When we ask which conduit will carry the gradient message to learn a long-range dependency faster, the answer will depend on the magnitude and distribution of the recurrence gradients. If the",
"language model relies on linguistic structure in the conduit in order to pass the message effectively, then the more predictable conduit will facilitate learning a long-range pattern.In order to investigate whether long-range dependencies are built from short constituents, we train models on synthetic data which varies the predictability of short sequences. We find",
"that memorizing local patterns allows a language model to learn a long-range dependency faster but ultimately inhibits its ability to fully acquire longrange rules.",
"We confirm that the longer the span of a rule, the more examples are required for an LSTM model to effectively learn the rule.",
"We then find1 that a more predictable conduit between the rule symbols promotes the early learning of the rule, implying that the process by which an LSTM learns long-range rules is compositional.",
"However, the representation learned through the predictable conduit ultimately prevents the model from confidently learning these long-range connections."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2631579041481018,
0.6153846383094788,
0.14814814925193787,
0.1875,
0.1249999925494194,
0.09090908616781235,
0.08888888359069824,
0.14814814925193787,
0.1249999925494194,
0.13114753365516663,
0.1666666567325592,
0.10526315122842789,
0.1621621549129486,
0.1818181723356247,
0.1875
] | BJl3s1h9aV | true | [
"LSTMs learn long-range dependencies compositionally by building them from shorter constituents over the course of training."
] |
[
"Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms.",
"The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm or its adaptive variants.",
"However, SGD requires a hand-designed schedule for the learning rate.",
"In addition, its adaptive variants tend to produce solutions that generalize less well on unseen data than SGD with a hand-designed schedule.",
"We present an optimization method that offers empirically the best of both worlds: our algorithm yields good generalization performance while requiring only one hyper-parameter.",
"Our approach is based on a composite proximal framework, which exploits the compositional nature of deep neural networks and can leverage powerful convex optimization algorithms by design.",
"Specifically, we employ the Frank-Wolfe (FW) algorithm for SVM, which computes an optimal step-size in closed-form at each time-step.",
"We further show that the descent direction is given by a simple backward pass in the network, yielding the same computational cost per iteration as SGD.",
"We present experiments on the CIFAR and SNLI data sets, where we demonstrate the significant superiority of our method over Adam, Adagrad, as well as the recently proposed BPGrad and AMSGrad.",
"Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while often converging faster.",
"The code is publicly available at https://github.com/oval-group/dfw.",
"Since the introduction of back-propagation BID23 , stochastic gradient descent (SGD) has been the most commonly used optimization algorithm for deep neural networks.",
"While yielding remarkable performance on a variety of learning tasks, a downside of the SGD algorithm is that it requires a schedule for the decay of its learning rate.",
"In the convex setting, curvature properties of the objective function can be used to design schedules that are hyper-parameter free and guaranteed to converge to the optimal solution (Bubeck, 2015) .",
"However, there is no analogous result of practical interest for the non-convex optimization problem of a deep neural network.",
"An illustration of this issue is the diversity of learning rate schedules used to train deep convolutional networks with SGD: BID25 and He et al. (2016) adapt the learning rate according to the validation performance, while BID27 , BID3 and BID8 use pre-determined schedules, which are respectively piecewise constant, geometrically decaying, and cyclic with a cosine annealing.",
"While these protocols result in competitive or state-of-the-art results on their learning task, there does not seem to be a consistent methodology.",
"As a result, finding such a schedule for a new setting is a time-consuming and computationally expensive effort.To alleviate this issue, adaptive gradient methods have been developed BID36 BID4 BID21 , and borrowed from online convex optimization (Duchi et al., 2011) .",
"Typically, these methods only require the tuning of the initial learning rate, the other hyper-parameters being considered robust across applications.",
"However, it has been shown that such adaptive gradient methods obtain worse generalization than SGD BID32 .",
"This observation is corroborated by our experimental results.In order to bridge this performance gap between existing adaptive methods and SGD, we introduce a new optimization algorithm, called Deep Frank-Wolfe (DFW).",
"The DFW algorithm exploits the composite structure of deep neural networks to design an optimization algorithm that leverages efficient convex solvers.",
"In more detail, we consider a composite (nested) optimization problem, with the loss as the outer function and the function encoded by the neural network as the inner one.",
"At each iteration, we define a proximal problem with a first-order approximation of the neural network (linearized inner function), while keeping the loss function in its exact form (exact outer function).",
"When the loss is the hinge loss, each proximal problem created by our formulation is exactly a linear SVM.",
"This allows us to employ the powerful Frank-Wolfe (FW) algorithm as the workhorse of our procedure.There are two by-design advantages to our method compared to the SGD algorithm.",
"First, each iteration exploits more information about the learning objective, while preserving the same computational cost as SGD.",
"Second, an optimal step-size is computed in closed-form by using the FW algorithm in the dual (Frank & Wolfe, 1956 BID5 .",
"Consequently, we do not need a hand-designed schedule for the learning rate.",
"As a result, our algorithm is the first to provide competitive generalization error compared to SGD, all the while requiring a single hyper-parameter and often converging significantly faster.We present two additional improvements to customize the use of the DFW algorithm to deep neural networks.",
"First, we show how to smooth the loss function to avoid optimization difficulties arising from learning deep models with SVMs (Berrada et al., 2018) .",
"Second, we incorporate Nesterov momentum (Nesterov, 1983) to accelerate our algorithm.We demonstrate the efficacy of our method on image classification with the CIFAR data sets (Krizhevsky, 2009) using two architectures: wide residual networks BID35 and densely connected convolutional neural networks BID3 ; we also provide experiments on natural language inference with a Bi-LSTM on the SNLI corpus (Bowman et al., 2015) .",
"We show that the DFW algorithm often strongly outperforms previous methods based on adaptive learning rates.",
"Furthermore, it provides comparable or better accuracy to SGD with hand-designed learning rate schedules.In conclusion, our contributions can be summed up as follows:• We propose a proximal framework which preserves information from the loss function.•",
"For the first time for deep neural networks, we demonstrate how our formulation gives at each iteration (",
"i) an optimal step-size in closed form and",
"(ii) an update at the same computational cost as SGD.•",
"We design a novel smoothing scheme for the dual optimization of SVMs.•",
"To the best of our knowledge, the resulting DFW algorithm is the first to offer comparable or better generalization to SGD with a hand-designed schedule on the CIFAR data sets, all the while converging several times faster and requiring only a single hyperparameter.",
"Our empirical evidence indicates that the initial learning rate can be a crucial hyper-parameter for good generalization.",
"We have observed in our experiments that such a choice of high learning rate provides a consistent improvement for convolutional neural networks: accurate minimization of the training objective with large initial steps usually leads to good generalization.",
"Furthermore, as mentioned in the previous section, it is sometimes beneficial to even increase the batch-size in order to be able to train the model using large initial steps.In the case of recurrent neural networks, however, this effect is not as distinct.",
"Additional experiments on different recurrent architectures have showed variations in the impact of the learning rate and in the best-performing optimizer.",
"Further analysis would be required to understand the effects at play.",
"We have introduced DFW, an efficient algorithm to train deep neural networks.",
"DFW predominantly outperforms adaptive gradient methods, and obtains similar performance to SGD without requiring a hand-designed learning rate schedule.We emphasize the generality of our framework in Section 3, which enables the training of deep neural networks to benefit from any advance on optimization algorithms for linear SVMs.",
"This framework could also be applied to other loss functions that yield efficiently solvable proximal problems.",
"In particular, our algorithm already supports the use of structured prediction loss functions BID28 BID30 , which can be used, for instance, for image segmentation.We have mentioned the intricate relationship between optimization and generalization in deep learning.",
"This illustrates a major difficulty in the design of effective optimization algorithms for deep neural networks: the learning objective does not include all the regularization needed for good generalization.",
"We believe that in order to further advance optimization for deep neural networks, it is essential to alleviate this problem and expose a clear objective function to optimize."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1463414579629898,
0.051282044500112534,
0.0714285671710968,
0.04999999329447746,
0.0476190410554409,
0.2222222238779068,
0.10810810327529907,
0.1904761791229248,
0.08888888359069824,
0.0952380895614624,
0.07692307233810425,
0.09999999403953552,
0.04878048226237297,
0.045454539358615875,
0.1111111044883728,
0.12121211737394333,
0.04999999329447746,
0.0714285671710968,
0,
0,
0.12244897335767746,
0.10526315122842789,
0.19512194395065308,
0.12765957415103912,
0.2857142686843872,
0,
0.11428570747375488,
0.10810810327529907,
0.06666666269302368,
0.1818181723356247,
0,
0.16438356041908264,
0.05882352590560913,
0.072727270424366,
0.2222222238779068,
0.07692307233810425,
0.06896550953388214,
0.12903225421905518,
0.072727270424366,
0.05714285373687744,
0.11320754140615463,
0.11320754140615463,
0.0555555522441864,
0.06896550953388214,
0.2666666507720947,
0.1904761791229248,
0,
0.07407406717538834,
0.09090908616781235,
0.1818181723356247
] | SyVU6s05K7 | true | [
"We train neural networks by locally linearizing them and using a linear SVM solver (Frank-Wolfe) at each iteration."
] |
[
"In this paper, we show how novel transfer reinforcement learning techniques can be applied to the complex task of target-driven navigation using the photorealisticAI2THOR simulator.",
"Specifically, we build on the concept of Universal SuccessorFeatures with an A3C agent.",
"We introduce the novel architectural1contribution of a Successor Feature Dependent Policy (SFDP) and adopt the concept of VariationalInformation Bottlenecks to achieve state of the art performance.VUSFA, our final architecture, is a straightforward approach that can be implemented using our open source repository.",
"Our approach is generalizable, showed greater stability in training, and outperformed recent approaches in terms of transfer learning ability.",
"The human's ability of navigating unknown spaces (e.g. a firefighter finding the fire hydrant very quickly) primarily relies on visual perception, as well as on previous experience and heavy training (Ramirez et al., 2009 ).",
"In robotics, we would like to mimic this human behaviour.",
"The advancement of visual navigation algorithms essentially contribute to the prevalence and mobility in robotics and therefore, many different approaches are being explored.",
"Previous research has studied map-based, map-building, and map-less approaches (Bonin-Font et al., 2008; Oriolo et al., 1995; Borenstein & Koren, 1991) .",
"In the past, map-based and map-building approaches have been favoured.",
"However, they heavily depend on an accurate mapping of the environment.",
"Also, it requires a carefully executed human-guided training phase which limits its generalizability (Filliat & Meyer, 2003) .",
"With recent advances in Deep Reinforcement Learning (DRL) (Mnih et al., 2015; , map-less navigation has experienced major advancements Mirowski et al., 2018) .",
"It has been demonstrated that DRL-based methods are now able to solve navigation tasks in a more human-like manner (Fan et al., 2018) .",
"Research has shown that DRL-based navigation, in particular target driven visual navigation, is still a challenging task especially when targets are represented in the form of visual information that is highly dynamic.",
"In previous navigation paradigms, the agent navigates to a target demonstrating specific properties (e.g. a yellow cone, such as in the case of Zhang et al. (2017) ), whose location may change over time.",
"In contrast, in target driven visual navigation, the agent should be able to learn to navigate in a persistent state space to a dynamic set of goals.",
"The agent is required to learn to navigate when both the goal and the current state are presented as visual images.",
"A current challenge for DRL algorithms is learning new tasks or goals that vary from what the agent was initially trained for.",
"This ability is called transfer learning.",
"There are two popular strategies for achieving transfer learning in DRL, either by using the concept of General Value Functions (GVF) (Sutton et al., 2011) or by using Successor Feature Approximation (SFA) (Dayan, 1993) .",
"For the task of target driven visual navigation, demonstrated that an A3C agent using the concept of GVF can improve the transfer learning ability.",
"GVF does not however allow us to easily see the underlining process of learning the dynamics of tasks and GVF agents also frequently struggle in complex environments (Sutton et al., 2018) .",
"The second strategy, applying SFA, enables us to capture the dynamics of the environment by attempting to learn future state visitations, although these also encounter limitations when facing multiple tasks.",
"Universal Successor Features Approximators (USFA) , which is an extension of SFA, is able to consider multiple tasks and can improve the transfer learning ability of the agent.",
"In summary, our research contribution is threefold:",
"• For the first time in the literature, we apply Universal Successor Feature Approximators (USFA) for the complex task of target driven visual navigation.",
"Our new approach provides a stable training mechanism and enhances the transfer reinforcement learning ability in complex environments.",
"• We introduce the concept of a Successor Feature Dependant Policy (SFDP), a novel architectural contribution in which the policy can directly make use of the information presented by USFA (an abstract map in our case).",
"This important add-on significantly improves the transfer learning ability of the DRL agent.",
"• Finally, we contribute Variational Universal Successor Feature Approximators (VUSFA), by adopting the concept of Variational Information Bottlenecks.",
"We show that this combination works stably with complex tasks such as target driven visual navigation in the photo-realistic AI2THOR environment .",
"Besides stable convergence, our approach shows possible ways in which transfer learning could be improved in the future.",
"Our second contribution is the addition of a Successor Feature Dependant Policy (SFDP) to the USFA implementation.",
"As mentioned before, ψ g (s t ) can be seen as an abstract representation of the cumulative sum of the future states the agent will visit by following an optimal policy (Dayan, 1993; Barreto et al., 2017) .",
"Traditionally, successor features are not directly consulted when determining an action (Ma et al., 2018b) .",
"However, we hypothesise that feeding the abstract map of future states could be useful in determining the next action.",
"USF can be described as representing the cumulutive sum of discounted future states the agent visits following an optimal policy.",
"This property by itself helps with transfer learning because eventhough different goals have different optimal paths, they can share some common sub-paths.",
"For example, when tasked with finding the microwave and sink in a kitchen, the initial steps of the agent in going to the kitchen will be similar for both tasks.",
"We hypothesised that if the policy has direct access to the USF (see Equation 7), the agent will be able to learn from these similar paths.",
"By directly concatenating ψ g with the final layer of the policy head naively results in ψ g being updated with gradients from the conventional bellman optimality Equation 3 and the policy gradients Figure 1: Proposed Network Architecture \"VUSFA\": The model's input is the current state of the agent s t and the goal location g as images.",
"These go through a shared simaese encoder E(z|s t ).",
"The reparametrized output z is used to train the ω vector.",
"The policy is conditioned on the USF vector (dotted line indicates gradients do not flow from policy to the USFA head).",
"The USFA ψ is trained with the temporal difference error using φ to give the expected future state occupancies.",
"The discounted episode return is used to train both ω and USFA vectors.",
"of the A3C agent.",
"This can harm the true USF representation and can reduce the transfer learning capabilities of the agent.",
"Therefore in the final model, we stopped the gradient flow from the policy head to the USF branch.",
"The stopping of policy gradients for the USF branch is illustrated in Figure 1 with dotted lines.",
"We proposed Variational Universal Successor Features Approximator (VUSFA) to solve rather complex tasks, such as target driven visual navigation in photorealistic environments using the AI2THOR simulator.",
"To our knowledge, this is the first time the Deep Variational Information Bottleneck theory has been applied with Universal Successor Features in Deep Reinforcement Learning.",
"Our results indicate that VUSFA is able to improve the transfer learning ability in respect to previous state-of-the-art GVF and USF-RL based research .",
"Our approach is generalizable and can be easily adapted to various tasks other than navigation.",
"For re-implementation, we provide the source code via our github repository 1 .",
"Our approach introduces a new perspective and should be considered in future research aiming to improve transfer learning for Deep Reinforcement Learning.",
"In particular, further research could look into exploration of the semantical impacts of φ , ω, and ψ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23255813121795654,
0.25,
0.178571417927742,
0.1621621549129486,
0.07407406717538834,
0,
0.09756097197532654,
0,
0.06896550953388214,
0.19999998807907104,
0.0555555522441864,
0,
0,
0.08695651590824127,
0.07692307233810425,
0.0952380895614624,
0.052631575614213943,
0.14999999105930328,
0.1599999964237213,
0.19230768084526062,
0.3499999940395355,
0.1666666567325592,
0.08510638028383255,
0.5,
0,
0.19512194395065308,
0.1621621549129486,
0.23999999463558197,
0.32258063554763794,
0.2222222238779068,
0.09999999403953552,
0.277777761220932,
0.17142856121063232,
0.14814814925193787,
0.05714285373687744,
0.10810810327529907,
0.21052631735801697,
0.14999999105930328,
0.08888888359069824,
0.0952380895614624,
0.0634920597076416,
0,
0.06666666269302368,
0.052631575614213943,
0.05405404791235924,
0,
0.17391304671764374,
0.3030303120613098,
0.05882352590560913,
0.1111111044883728,
0.2222222238779068,
0.1904761791229248,
0.24390242993831635,
0.05882352590560913,
0.06451612710952759,
0.1463414579629898,
0.1111111044883728
] | BygXY34FDr | true | [
"We present an improved version of Universal Successor Features based DRL method which can improve the transfer learning of agents."
] |
[
"A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. ",
"This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables.",
"This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. ",
"In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. ",
"As a result, the obtained models are effective at both disentangling and reconstruction. ",
"We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations.",
"In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model.",
"This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior results, and avoiding adversarial training.",
"A desired characteristic of deep generative models is the ability to output realistic images while controlling one or more of the factors of variation underlying the image formation.",
"Moreover, when each unit in the model's internal image representation is sensitive to each of these factors, the model is said to obtain disentangled representations.",
"Learning such models has been approached in the past by training autoencoders where the latent variables (or a subset of them) are constrained to correspond to given factors of variation, which can be specified (supervised) or learned from the data (unsupervised) BID22 BID29 BID15 .",
"The remaining latent variables are typically considered nuisance variables and are used by the autoencoder to complete the reconstruction of the image.There exists one fundamental problem when learning disentangled representations using autoencoders, sometimes referred to as the \"shortcut problem\" BID29 .",
"If the dimension of the latent code is too large, the decoder ignores the latent variables associated to the specified factors of variation, and achieves the reconstruction by using the capacity available in the nuisance variables.",
"On the other hand, if the dimension of the latent code is small, the decoder is encouraged to use the specified variables, but is also limited in the amount of information it can use for reconstruction, so the reconstructed image is more distorted with respect to the autoencoder's input.",
"BID29 showed that this trade-off between reconstruction and disentangling can indeed be traversed by varying the dimension of the latent code.",
"However, no principled method exists to choose the optimal latent code dimension.The shortcut problem was also addressed by using additional mechanisms to make sure the decoder output is a function of the specified factors in the latent code.",
"One approach, for example, consists in swapping the specified part of the latent code between different samples, and using adversarial training to make sure the output distribution is indeed conditioned to the specified factors BID22 BID19 BID29 .",
"However, adversarial training remains a difficult and unstable optimization problem in practice.Based on these observations, we propose a method for avoiding the shortcut problem that requires no adversarial training and achieves good disentanglement and reconstruction at the same time.Our method consists in first training an autoencoder model, the teacher, where the dimension of the latent code is small, so that the autoencoder is able to effectively disentangle the factors of variation and condition its output on them.",
"These factors can be specified in a supervised manner or learned from the data in an unsupervised way, as we shall demonstrate.",
"After the teacher model is trained, we construct a student model that has a larger latent code dimension for the nuisance variables.",
"For the student, we optimize the reconstruction loss as well as an additional loss function that constrains the variation of the output with respect to the specified latent variables to be the same as the teacher's.In what follows, we consider autoencoder models (E, D), that receive an image x as input and produce a reconstructionx : D(E(x)) =x.",
"We consider that the latent code is always split into a specified factors part y ∈ R k and a nuisance variables part z ∈ R d : E(x) = (y, z), D (y, z) =x.Consider a teacher autoencoder (E T , D T ), with nuisance variables dimension d T , and a student DISPLAYFORM0 Because the dimension of the nuisance variables of the student is larger than in the teacher model, we expect a better reconstruction from it (i.e. ||x −x S || < ||x −x T ||, for some norm).At",
"the same time, we want the student model to maintain the same disentangling ability as the teacher as well as the conditioning of the output on the specified factors. A",
"first order approximation of this desired goal can be expressed as DISPLAYFORM1 where j ∈ {1...H",
"· W · C}, H, W and C are the dimensions of the output image, and i ∈ {1...k} indexes over the specified factors of variation.In this paper we propose a method to impose the first-order constraint in (1), which we term Jacobian supervision. We show two applications of this method. First, we propose an unsupervised algorithm that progressively disentangles the principal factors of variation in a dataset of images. Second, we use the Jacobian supervision to train an autoencoder model for images of faces, in which the factors of variation to be controlled are facial attributes. Our resulting model outperforms the state-of-theart in terms of both reconstruction quality and facial attribute manipulation ability.",
"A natural trade-off between disentanglement and reconstruction exists when learning image representations using autoencoder architectures.",
"In this work, we showed that it is possible to overcome this trade-off by first learning a teacher model that is good at disentangling and then imposing the Jacobian of this model with respect to the disentangled variables to a student model that is good at reconstruction.",
"The student model then becomes good at both disentangling and reconstruction.",
"We showed two example applications of this idea.",
"The first one was to progressively learn the principal factors of variation in a dataset, in an unsupervised manner.",
"The second application is a generative model that is able to manipulate facial attributes in human faces.",
"The resulting model is able to manipulate one order of magnitude more facial attributes than state-of-the-art methods, while obtaining similar or superior visual results, and requiring no adversarial training.",
"For the autoencoder utilized for experiments in Section 3, we used the following architecture.",
"For the encoder: DISPLAYFORM0 where F (I, O) indicates a fully connected layer with I inputs and O outputs.",
"For the first teacher model (k = 2, d = 0), we also used BatchNorm after the encoder output.The decoder is the exact symmetric of the encoder, with a Tanh layer appended at the end.We used Adam (Kingma & Ba, 2014 ) with a learning rate of 3e −4 , a batch size of 128 and weight decay coefficient 1e −6 ."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.47058823704719543,
0.13636362552642822,
0.07999999821186066,
0.08695651590824127,
0.25,
0.4000000059604645,
0.10256409645080566,
0.19512194395065308,
0.2380952388048172,
0.1538461446762085,
0.10344827175140381,
0.22641508281230927,
0.13636362552642822,
0.1111111044883728,
0.21052631735801697,
0.11764705181121826,
0.1599999964237213,
0.21052631735801697,
0.051282044500112534,
0.10810810327529907,
0.158730149269104,
0.1190476194024086,
0.20512819290161133,
0.05714285373687744,
0.19565217196941376,
0.3030303120613098,
0.23529411852359772,
0.27586206793785095,
0.07692307233810425,
0.1666666567325592,
0.05882352590560913,
0.12765957415103912,
0.06451612710952759,
0.05405404791235924,
0.08571428060531616
] | Hkg4W2AcFm | true | [
"A method for learning image representations that are good for both disentangling factors of variation and obtaining faithful reconstructions."
] |
[
"We propose a new notion of 'non-linearity' of a network layer with respect to an input batch that is based on its proximity to a linear system, which is reflected in the non-negative rank of the activation matrix.\n",
"We measure this non-linearity by applying non-negative factorization to the activation matrix.\n",
"Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization.",
"Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases.",
"We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets.",
"Our results demonstrate that as an indicator for memorization, our technique can be used to perform early stopping.",
"A fundamental challenge in machine learning is balancing the bias-variance tradeoff, where overly simple learning models underfit the data (suboptimal performance on the training data) and overly complex models are expected to overfit or memorize the data (perfect training set performance, but suboptimal test set performance).",
"The latter direction of this tradeoff has come into question with the observation that deep neural networks do not memorize their training data despite having sufficient capacity to do so BID38 , the explanation of which is a matter of much interest.Due to their convenient gradient properties and excellent performance in practice, rectified-linear units (ReLU) have been widely adopted and are now ubiquitous in the field of deep learning.",
"In addition, the relative simplicity of this function (max(·, 0)) makes the analysis of ReLU networks more straight-forward than networks with other activation functions.We propose a new notion of 'non-linearity' of a ReLU layer with respect to an input batch.",
"We show that networks that generalize well have deep layers that are approximately linear with respect to batches of similar inputs.",
"In contrast, networks that memorize their training data are highly nonlinear with respect to similar inputs, even in deep layers.Our method is based on the fact that the main source of non-linearity in ReLU networks is the threshold at zero.",
"This thresholding determines the support of the resulting activation matrix, which plays an important role in the analysis of non-negative matrices.",
"As we discuss in Section 3, the non-negative rank of a matrix is constrained by the shape of the support, and is therefore indicative of the degree of non-linearity in a ReLU activation matrix with respect to the input.Although computing the non-negative rank is NP-hard (Vavasis, 2009), we can restrict it with approximate non-negative matrix factorization (NMF) BID20 .",
"Consequently, we propose to estimate the 'non-linearity' of a ReLU layer with respect to an input batch by performing NMF on a grid over the approximation rank k, and measuring the impact on network performance.",
"This procedure can be seen as measuring the robustness of a neural network to increasing compression of its activations.",
"We therefore compare our NMF-based approach to two additional dimensionality reduction techniques, namely principal component analysis (PCA) and random ablations.We informally define memorization as the implicit learning of a rule that associates a specific sample (i.e., with index",
"i) to a particular label (e.g., with index",
"j).",
"Such a rule does not benefit the network in terms of improving its performance on new data.We show that our NMF-based approach is extremely sensitive to memorization in neural networks.",
"We report results for a variety of neural network architectures trained on several image and audio datasets.",
"We conduct a layer-by-layer analysis and our results reveal interesting details on the internal mechanism of memorization in neural networks.",
"Finally, as an indicator for memorization, we use our proposed measure to perform early stopping.",
"We have introduced a notion of a ReLU layer's non-linearity with respect to an input batch, which is based on its proximity to a linear system.",
"We measure this property indirectly via NMF applied to deep activations of single-class batches.",
"While more analysis is required before definite guarantees could be given, we find that our approach is successful in detecting memorization and generalization across a variety of neural network architectures and datasets.",
"The exact architectures we used for each dataset are given in Table 1 .",
"We denote a linear or convolutional layer followed by a ReLU as Linear + and Conv + , respectively."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.30188679695129395,
0.29411762952804565,
0.05405404791235924,
0.10526315122842789,
0.11428570747375488,
0.051282044500112534,
0.06896550953388214,
0.1249999925494194,
0.25925925374031067,
0.19999998807907104,
0.1428571343421936,
0.25641024112701416,
0.32786884903907776,
0.2745097875595093,
0.20512819290161133,
0.23333333432674408,
0.12903225421905518,
0.19607841968536377,
0.21052631735801697,
0.24390242993831635,
0.1666666567325592,
0.22727271914482117,
0.17142856121063232,
0.15686273574829102,
0,
0.2631579041481018
] | HJeB0sC9Fm | true | [
"We use the non-negative rank of ReLU activation matrices as a complexity measure and show it (negatively) correlates with good generalization."
] |
[
"Deep neural networks have achieved state-of-the-art performance in various fields, but they have to be scaled down to be used for real-world applications.",
"As a means to reduce the size of a neural network while preserving its performance, knowledge transfer has brought a lot of attention.",
"One popular method of knowledge transfer is knowledge distillation (KD), where softened outputs of a pre-trained teacher network help train student networks.",
"Since KD, other transfer methods have been proposed, and they mainly focus on loss functions, activations of hidden layers, or additional modules to transfer knowledge well from teacher networks to student networks.",
"In this work, we focus on the structure of a teacher network to get the effect of multiple teacher networks without additional resources.",
"We propose changing the structure of a teacher network to have stochastic blocks and skip connections.",
"In doing so, a teacher network becomes the aggregate of a huge number of paths.",
"In the training phase, each sub-network is generated by dropping stochastic blocks randomly and used as a teacher network.",
"This allows training the student network with multiple teacher networks and further enhances the student network on the same resources in a single teacher network.",
"We verify that the proposed structure brings further improvement to student networks on benchmark datasets.",
"Deep neural networks (DNNs) have achieved state-of-theart performances on complex tasks like computer vision (He et al. 2016) , language modeling (Jozefowicz et al. 2016) , and machine translation .",
"Moreover, they surpass human ability in several fields including image classification (He et al. 2016) , the go game , voice generation (Oord et al. 2016) , and so on.",
"Despite their superior performance, it is difficult to use DNN-based models because of limited memory and computational resources in the embedded systems.",
"To deal with this problem, many studies have been done to make DNNs smaller but efficient to be applicable in resource limited cases.",
"One of them is knowledge transfer (KT), which train a smaller network with the information of large model's information.",
"Knowledge",
"The primary goal of this paper is to make a single teacher network to behave as multiple teacher networks.",
"Since multiple teacher networks provide various outputs on a given input, they can provide more extensive knowledge than a single teacher network does.",
"It has been shown that student networks improve further with multiple teacher networks which are used as an ensemble or separately (Hinton, Vinyals, and Dean 2015; You et al. 2017; Zhang et al. 2018) .",
"However, using multiple teacher networks is a resource burden and delays the training process.",
"In this work, we propose to add stochastic blocks and skip connections to a teacher network.",
"In doing so, we can get the effect of multiple teacher networks in the same resource of single teacher network.",
"A stochastic block is a block that falls with a fixed probability in the training phase and weighted by its survival probability in the inference phase .",
"Skip connections make huge number of paths in the network and function as memory which link the information of previous parts and later parts even if stochastic blocks drop.",
"In the training phase, different sub-networks are generated resulting from stochastic drop in the teacher network for each batch.",
"The sub-networks still have reliable performances since there still exist valid paths.",
"Each sub-network becomes a teacher network for each batch, so the student network is trained with multiple teacher networks in the entire training phase.",
"Figure 1 is example of sub-networks generated by dropping one block each from a network with the proposed structure.",
"The networks consists of 3 blocks and f i , Id represents the ith block of the network (i ∈ 1, 2, 3) and an identity block generated by a skip connection respectively.",
"Red arrows in the figure mean that the outputs of the blocks are 0.",
"In Figure 1 , even if one block drops, each subnetwork still has 4 valid paths of 8 total paths.",
"We observe that :",
"(i) multiple teacher networks are generated from a single teacher network with no more resources;",
"(ii) generated networks provide different knowledge to a student network;",
"(iii) the performances of student networks improve with the help of a teacher network of the proposed structure.",
"We succeeded in training the student network to perform better than the ones with the same architecture trained by the knowledge transfer methods (KD) (Hinton, Vinyals, and Dean 2015) , attention transfer (AT) (Zagoruyko and Komodakis 2016a) , and mutual learning (ML) (Zhang et al. 2018) ) over CIFAR-100 (Krizhevsky, Hinton, and others 2009 ) and tinyimageNet (Russakovsky et al. 2015) datasets.",
"The rest of this paper is organized as follows.",
"First, we review recent studies related to our work.",
"Then, we demonstrate the proposed scheme with details.",
"After this, we present experiments and discuss the results.",
"Finally, summary and concluding remarks are given in the conclusion.",
"In this work, we propose to change the structure of a teacher network to get the effect of multiple teacher networks in the same resource of one teacher network.",
"In our proposed structure, we obtain multiple teacher networks without additional resource so that compact networks improve further than those trained from conventional transfer methods.",
"The proposed structure can be easily applied to other transfer methods and tasks, e.g. segmentation or object detection."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09999999403953552,
0.14999999105930328,
0.19999998807907104,
0.20408162474632263,
0.44999998807907104,
0.5,
0.1818181723356247,
0.3589743673801422,
0.25641024112701416,
0.17142856121063232,
0.08888888359069824,
0.08888888359069824,
0.2380952388048172,
0.0952380895614624,
0.1621621549129486,
0.5405405163764954,
0.14999999105930328,
0.15686273574829102,
0.3529411852359772,
0.4571428596973419,
0.37837836146354675,
0.25,
0.2666666507720947,
0.15789473056793213,
0.06451612710952759,
0.24390242993831635,
0.20512819290161133,
0.3265306055545807,
0.1875,
0.051282044500112534,
0,
0.1764705777168274,
0.13333332538604736,
0.23529411852359772,
0.11594202369451523,
0.3448275923728943,
0.06896550953388214,
0.0714285671710968,
0.13793103396892548,
0.13333332538604736,
0.4390243887901306,
0.13636362552642822,
0.1538461446762085
] | HklA93NYwS | true | [
"The goal of this paper is to get the effect of multiple teacher networks by exploiting stochastic blocks and skip connections."
] |