source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (stringlengths 9-11) | ic (unknown) | target (sequence)
---|---|---|---|---|---|
[
"This paper explores the use of self-ensembling for visual domain adaptation problems.",
"Our technique is derived from the mean teacher variant (Tarvainen et. al 2017) of temporal ensembling (Laine et al. 2017), a technique that achieved state of the art results in the area of semi-supervised learning.",
"We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness.",
"Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge.",
"In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.",
"The strong performance of deep learning in computer vision tasks comes at the cost of requiring large datasets with corresponding ground truth labels for training.",
"Such datasets are often expensive to produce, owing to the cost of the human labour required to produce the ground truth labels.Semi-supervised learning is an active area of research that aims to reduce the quantity of ground truth labels required for training.",
"It is aimed at common practical scenarios in which only a small subset of a large dataset has corresponding ground truth labels.",
"Unsupervised domain adaptation is a closely related problem in which one attempts to transfer knowledge gained from a labeled source dataset to a distinct unlabeled target dataset, within the constraint that the objective (e.g.digit classification) must remain the same.",
"Domain adaptation offers the potential to train a model using labeled synthetic data -that is often abundantly available -and unlabeled real data.",
"The scale of the problem can be seen in the VisDA-17 domain adaptation challenge images shown in FIG3 .",
"We will present our winning solution in Section 4.2.Recent work BID28 ) has demonstrated the effectiveness of self-ensembling with random image augmentations to achieve state of the art performance in semi-supervised learning benchmarks.We have developed the approach proposed by BID28 to work in a domain adaptation scenario.",
"We will show that this can achieve excellent results in specific small image domain adaptation benchmarks.",
"More challenging scenarios, notably MNIST → SVHN and the VisDA-17 domain adaptation challenge required further modifications.",
"To this end, we developed confidence thresholding and class balancing that allowed us to achieve state of the art results in a variety of benchmarks, with some of our results coming close to those achieved by traditional supervised learning.",
"Our approach is sufficiently flexble to be applicable to a variety of network architectures, both randomly initialized and pre-trained.Our paper is organised as follows; in Section 2 we will discuss related work that provides context and forms the basis of our technique; our approach is described in Section 3 with our experiments and results in Section 4; and finally we present our conclusions in Section 5.",
"In this section we will cover self-ensembling based semi-supervised methods that form the basis of our approach and domain adaptation techniques to which our work can be compared.",
"We have presented an effective domain adaptation algorithm that has achieved state of the art results in a number of benchmarks and has achieved accuracies that are almost on par with traditional supervised learning on digit recognition benchmarks targeting the MNIST and SVHN datasets.",
"The Table 2 : VisDA-17 performance, presented as mean ± std-dev of 5 independent runs.",
"Full results are presented in TAB6 in Appendix C. resulting networks will exhibit strong performance on samples from both the source and target domains.",
"Our approach is sufficiently flexible to be usable for a variety of network architectures, including those based on randomly initialised and pre-trained networks.",
"stated that the self-ensembling methods presented by Laine & Aila (2017) -on which our algorithm is based -operate by label propagation.",
"This view is supported by our results, in particular our MNIST → SVHN experiment.",
"The latter requires additional intensity augmentation in order to sufficiently align the dataset distributions, after which good quality label predictions are propagated throughout the target dataset.",
"In cases where data augmentation is insufficient to align the dataset distributions, a pre-trained network may be used to bridge the gap, as in our solution to the VisDA-17 challenge.",
"This leads us to conclude that effective domain adaptation can be achieved by first aligning the distributions of the source and target datasets -the focus of much prior art in the field -and then refining their correspondance; a task to which self-ensembling is well suited.",
"The datasets used in this paper are described in Some of the experiments that involved datasets described in TAB3 required additional data preparation in order to match the resolution and format of the input samples and match the classification target.",
"These additional steps will now be described.MNIST ↔ USPS The USPS images were up-scaled using bilinear interpolation from 16 × 16 to 28 × 28 resolution to match that of MNIST.CIFAR-10 ↔ STL CIFAR-10 and STL are both 10-class image datasets.",
"The STL images were down-scaled to 32 × 32 resolution to match that of CIFAR-10.",
"The 'frog' class in CIFAR-10 and the 'monkey' class in STL were removed as they have no equivalent in the other dataset, resulting in a 9-class problem with 10% less samples in each dataset.Syn-Signs → GTSRB GTSRB is composed of images that vary in size and come with annotations that provide region of interest (bounding box around the sign) and ground truth classification.",
"We extracted the region of interest from each image and scaled them to a resolution of 40 × 40 to match those of Syn-Signs.MNIST ↔ SVHN The MNIST images were padded to 32 × 32 resolution and converted to RGB by replicating the greyscale channel into the three RGB channels to match the format of SVHN.B SMALL IMAGE EXPERIMENT TRAINING B.1",
"TRAINING PROCEDURE Our networks were trained for 300 epochs.",
"We used the Adam BID12 gradient descent algorithm with a learning rate of 0.001.",
"We trained using mini-batches composed of 256 samples, except in the Syn-digits → SVHN and Syn-signs → GTSRB experiments where we used 128 in order to reduce memory usage.",
"The self-ensembling loss was weighted by a factor of 3 and the class balancing loss was weighted by 0.005.",
"Our teacher network weights t i were updated so as to be an exponential moving average of those of the student s i using the formula t i = αt i−1 + (1 − α)s i , with a value of 0.99 for α.",
"A complete pass over the target dataset was considered to be one epoch in all experiments except the MNIST → USPS and CIFAR-10 → STL experiments due to the small size of the target datasets, in which case one epoch was considered to be a pass over the larger soure dataset.We found that using the proportion of samples that passed the confidence threshold can be used to drive early stopping BID18 ).",
"The final score was the target test set performance at the epoch at which the highest confidence threshold pass rate was obtained.C VISDA-17 C.1",
"HYPER-PARAMETERS Our training procedure was the same as that used in the small image experiments, except that we used 160 × 160 images, a batch size of 56 (reduced from 64 to fit within the memory of an nVidia 1080-Ti), a self-ensembling weight of 10 (instead of 3), a confidence threshold of 0.9 (instead of 0.968) and a class balancing weight of 0.01.",
"We used the Adam BID12 gradient descent algorithm with a learning rate of 10 −5 for the final two randomly initialized layers and 10 DISPLAYFORM0 for the pre-trained layers.",
"The first convolutional layer and the first group of convolutional layers (with 64 feature channels) of the pre-trained ResNet were left unmodified during training.Reduced data augmentation:• scale image so that its smallest dimension is 176 pixels, then randomly crop a 160 × 160 section from the scaled image • No random affine transformations as they increase confusion between the car and truck classes in the validation set • random uniform scaling in the range [0.75, 1.333]• horizontal flipping Competition data augmentation adds the following in addition to the above:• random intensity/brightness scaling in the range [0.75, 1.333]• random rotations, normally distributed with a standard deviation of 0.2π• random desaturation in which the colours in an image are randomly desaturated to greyscale by a factor between 0% and 100% • rotations in colour space, around a randomly chosen axes with a standard deviation of 0.05π• random offset in colour space, after standardisation using parameters specified by 10 × 10 × 192 Dropout, 50%10 × 10 × 192 Conv 3 × 3 × 384, pad 1, batch norm 10 × 10 × 384 Conv 3 × 3 × 384, pad 1, batch norm 10 × 10 × 384 Conv 3 × 3 × 384, pad 1, batch norm 10 × 10 × 384 Max-pool, 2x25 × 5 × 384 Dropout, 50%5 × 5 × 384 Global pooling layer 1 × 1 × 384 Fully connected, 43 units, softmax 43 Table 8 : Syn-signs → GTSRB architecture"
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4000000059604645,
0.17391303181648254,
0.2222222238779068,
0.41025641560554504,
0.13333332538604736,
0.1428571343421936,
0.12244897335767746,
0.051282044500112534,
0.1111111044883728,
0.10256409645080566,
0.29411762952804565,
0.23728813230991364,
0.1764705777168274,
0.23529411852359772,
0.15094339847564697,
0.060606054961681366,
0.2222222238779068,
0.25925925374031067,
0.060606054961681366,
0.04878048226237297,
0.1463414579629898,
0.15789473056793213,
0.06451612710952759,
0.0476190410554409,
0.09090908616781235,
0.16949151456356049,
0.12765957415103912,
0.07692307233810425,
0.06451612710952759,
0.0882352888584137,
0.09677419066429138,
0.07407406717538834,
0.1818181723356247,
0.08888888359069824,
0.11764705181121826,
0.1071428507566452,
0.05970148742198944,
0.051282044500112534,
0.09090908616781235,
0.1904761791229248,
0.038709674030542374
] | rkpoTaxA- | true | [
"Self-ensembling based algorithm for visual domain adaptation, state of the art results, won VisDA-2017 image classification domain adaptation challenge."
] |
[
"It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before.",
"We call the ability to create images of novel semantic concepts visually grounded imagination.",
"In this paper, we show how we can modify variational auto-encoders to perform this task.",
"Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way.",
"We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C’s).",
"Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method of Suzuki et al., 2017 and the BiVCCA method of Wang et al., 2016) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here), and the CelebA dataset (Liu et al., 2015).",
"Consider the following two-party communication game: a speaker thinks of a visual concept C, such as \"men with black hair\", and then generates a description y of this concept, which she sends to a listener; the listener interprets the description y, by creating an internal representation z, which captures its \"meaning\".",
"We can think of z as representing a set of \"mental images\" which depict the concept C. To test whether the listener has correctly \"understood\" the concept, we ask him to draw a set of real images S = {x s : s = 1 : S}, which depict the concept C. He then sends these back to the speaker, who checks to see if the images correctly match the concept C. We call this process visually grounded imagination.In this paper, we represent concept descriptions in terms of a fixed length vector of discrete attributes A. This allows us to specify an exponentially large set of concepts using a compact, combinatorial representation.",
"In particular, by specifying different subsets of attributes, we can generate concepts at different levels of granularity or abstraction.",
"We can arrange these concepts into a compositional abstraction hierarchy, as shown in Figure 1 .",
"This is a directed acyclic graph (DAG) in which nodes represent concepts, and an edge from a node to its parent is added whenever we drop one of the attributes from the child's concept definition.",
"Note that we dont make any assumptions about the order in which the attributes are dropped (that is, dropping the attribute \"smiling\" is just as valid as dropping \"female\" in Figure 1 ).",
"Thus, the tree shown in the figure is just a subset extracted from the full DAG of concepts, shown for illustration purposes.We can describe a concept by creating the attribute vector y O , in which we only specify the value of the attributes in the subset O ⊆ A; the remaining attributes are unspecified, and are assumed to take all possible legal values.",
"For example, consider the following concepts, in order of increasing abstraction: C msb = (male, smiling, blackhair), C * sb = ( * , smiling, blackhair), and C * * b = ( * , * , blackhair), where the attributes are gender, smiling or not, and hair color, and * represents \"don't care\".",
"A good model should be able to generate images from different levels of the abstraction hierarchy, as shown in Figure 1 .",
"(This is in contrast to most prior work on conditional generative models of images, which assume that all attributes are fully specified, which corresponds to sampling only from leaf nodes in the hierarchy.)",
"Figure 1 : A compositional abstraction hierarchy for faces, derived from 3 attributes: hair color, smiling or not, and gender.",
"We show a set of sample images generated by our model, when trained on CelebA, for different nodes in this hierarchy.In Section 2, we show how we can extend the variational autoencoder (VAE) framework of BID15 to create models which can perform this task.",
"The first extension is to modify the model to the \"multi-modal\" setting where we have both an image, x, and an attribute vector, y.",
"More precisely, we assume a joint generative model of the form p(x, y,",
"z) = p(z)p(x|z)p(y|z",
"), where p(z) is the prior",
"over the latent variable z, p(x|z) is our image decoder, and p(y|z) is our description decoder. We additionally",
"assume that the description decoder factorizes over the specified attributes in the description, so p(y O |z) = k∈O p(y k |z).We further extend",
"the VAE by devising a novel objective function, which we call the TELBO, for training the model from paired data, D = {(x n , y n )}. However, at test",
"time, we will allow unpaired data (either just a description or just an image). Hence we fit three",
"inference networks: q(z|x, y), q(z|x) and q(z|y",
"). This way we can embed",
"an image or a description into the same shared latent space (using q(z|x) and q(z|y), respectively); this lets us \"translate\" images into descriptions or vice versa, by computing p(y|x) = dz p(y|z)q(z|x) and p(x|y) = dz p(x|z)q(z|y).To handle abstract concepts",
"(i.e., partially observed attribute vectors), we use a method based on the product of experts (POE) BID8 . In particular, our inference",
"network for attributes has the form q(z|y O ) ∝ p(z) k∈O q(z|y k ). If no attributes",
"are specified, the",
"posterior is equal to the prior. As we condition on more attributes,",
"the posterior becomes narrower, which corresponds to specifying a more precise concept. This enables us to generate a more",
"diverse set of images to represent abstract concepts, and a less diverse set of images to represent concrete concepts, as we show below.Section 3 discusses how to evaluate the performance of our method in an objective way. Specifically, we first \"ground\" the",
"description by generating a set of images, S(y O ) = {x s ∼ p(x|y O ) : s = 1 : S}. We then check that all the sampled",
"images in S(y O ) are consistent with the specified attributes y O (we call this correctness). We also check that the set of images",
"\"spans\" the extension of the concept, by exhibiting suitable diversity (c.f. BID36 ). Concretely, we check that the attributes",
"that were not specified (e.g., gender in C * sb above) vary across the different images; we call this coverage. Finally, we want the set of images to have",
"high correctness and coverage even if the concept y O has a combination of attribute values that have not been seen in training. For example, if we train on C msb = (male,",
"smiling, blackhair), and C f nb = (female, notsmiling, blackhair), we should be able to test on C mnb = (male, notsmiling, blackhair), and C f sb = (female, smiling, blackhair). We will call this property compositionality",
". Being able to generate plausible images in",
"response to truly compositionally novel queries is the essence of imagination. Together, we call these criteria the 3 C's",
"of visual imagination.Section 5 reports experimental results on two different datasets. The first dataset is a modified version of",
"MNIST, which we call MNIST-with-attributes (or MNIST-A), in which we \"render\" modified versions of a single MNIST digit on a 64x64 canvas, varying its location, orientation and size. The second dataset is CelebA BID16 , which",
"consists of over 200k face images, annotated with 40 binary attributes. We show that our method outperforms previous",
"methods on these datasets.The contributions of this paper are threefold. First, we present a novel extension to VAEs",
"in the multimodal setting, introducing a principled new training objective (the TELBO), and deriving an interpretation of a previously proposed objective (JMVAE) BID31 as a valid alternative in Appendix A.1. Second, we present a novel way to handle missing",
"data in inference networks based on a product of experts. Third, we present novel criteria (the 3 C's) for",
"evaluating conditional generative models of images, that extends prior work by considering the notion of visual abstraction and imagination.",
"We have shown how to create generative models which can \"imagine\" compositionally novel concrete and abstract visual concepts.",
"In the future we would like to explore richer forms of description, beyond attribute vectors, such as natural language text, as well as compositional descriptions of scenes, which will require dealing with a variable number of objects."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.0476190410554409,
0.25,
0.12903225421905518,
0.14999999105930328,
0.0416666604578495,
0.03389830142259598,
0.06557376682758331,
0.12371134012937546,
0.11428570747375488,
0.060606054961681366,
0.08163265138864517,
0.08695651590824127,
0.11940298229455948,
0.037735845893621445,
0.1538461446762085,
0.08163265138864517,
0.10526315122842789,
0.17241379618644714,
0.10256409645080566,
0,
0,
0,
0,
0,
0.08695651590824127,
0.05882352590560913,
0,
0.0833333283662796,
0.1111111044883728,
0.04878048226237297,
0,
0,
0.06666666269302368,
0.11764705181121826,
0.19999998807907104,
0,
0.04999999329447746,
0,
0.08510638028383255,
0.03999999538064003,
0.045454539358615875,
0.1599999964237213,
0.1111111044883728,
0,
0.03999999538064003,
0,
0.10810810327529907,
0.11320754140615463,
0.05405404791235924,
0,
0.3888888955116272,
0.11764705181121826
] | HkCsm6lRb | true | [
"A VAE-variant which can create diverse images corresponding to novel concrete or abstract \"concepts\" described using attribute vectors."
] |
[
"We introduce \"Search with Amortized Value Estimates\" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS).",
"In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values.",
"The new Q-estimates are then used in combination with real experience to update the prior.",
"This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search.",
"SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari.",
"SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets.",
"By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning.",
"Model-based methods have been at the heart of reinforcement learning (RL) since its inception (Bellman, 1957) , and have recently seen a resurgence in the era of deep learning, with powerful function approximators inspiring a variety of effective new approaches Chua et al., 2018; Wang et al., 2019) .",
"Despite the success of model-free RL in reaching state-of-the-art performance in challenging domains (e.g. Kapturowski et al., 2018; Haarnoja et al., 2018) , model-based methods hold the promise of allowing agents to more flexibly adapt to new situations and efficiently reason about what will happen to avoid potentially bad outcomes.",
"The two key components of any such system are the model, which captures the dynamics of the world, and the planning algorithm, which chooses what computations to perform with the model in order to produce a decision or action (Sutton & Barto, 2018) .",
"Much recent work on model-based RL places an emphasis on model learning rather than planning, typically using generic off-the-shelf planners like Monte-Carlo rollouts or search (see ; Wang et al. (2019) for recent surveys).",
"Yet, with most generic planners, even a perfect model of the world may require large amounts of computation to be effective in high-dimensional, sparse reward settings.",
"For example, recent methods which use Monte-Carlo Tree Search (MCTS) require 100s or 1000s of model evaluations per action during training, and even upwards of a million simulations per time step at test time (Anthony et al., 2017; .",
"These large search budgets are required, in part, because much of the computation performed during planning-such as the estimation of action values-is coarsely summarized in behavioral traces such as visit counts (Anthony et al., 2017; , or discarded entirely after an action is selected (Bapst et al., 2019; Azizzadenesheli et al., 2018) .",
"However, large search budgets are a luxury that is not always available: many real-world simulators are expensive and may only be feasible to query a handful of times.",
"In this paper, we explore preserving the value estimates that were computed by search by amortizing them via a neural network and then using this network to guide future search, resulting in an approach which works well even with very small search budgets.",
"We propose a new method called \"Search with Amortized Value Estimates\" (SAVE) which uses a combination of real experience as well as the results of past searches to improve overall performance and reduce planning cost.",
"During training, SAVE uses MCTS to estimate the Q-values at encountered states.",
"These Q-values are used along with real experience to fit a Q-function, thus amortizing the computation required to estimate values during search.",
"The Q-function is then used as a prior for subsequent searches, resulting in a symbiotic relationship between model-free learning and MCTS.",
"At test time, SAVE uses MCTS guided by the learned prior to produce effective behavior, even with very small search budgets and in environments with tens of thousands of possible actions per state-settings which are very challenging for traditional planners.",
"Here we expand on the results presented in the main text and in Figure 3d and Figure C .2.",
"Cross-entropy vs. L2 loss While the L2 loss ( Figure C .2, orange) can result in equivalent performance as the cross-entropy loss (Figure C.2, green) , this is at the cost of higher variance across seeds and lower performance on average.",
"This is likely because the L2 loss encourages the Q-function to exactly match the Q-values estimated by search.",
"However, with a search budget of 10, those Qvalues will be very noisy.",
"In contrast, the cross-entropy loss only encourages the Q-function to match the overall distribution shape of the Q-values estimated by search.",
"This is a less strong constraint that allows the information acquired during search to be exploited while not relying on it too strongly.",
"Indeed, we can observe that the agent with L2 amortization loss actually performs worse than the agent that has no amortization loss at all ( Figure C .2, purple) when using a search budget of 10, suggesting that trying to match the Q-values during search too closely can harm performance.",
"Additionally, we can consider an interesting interaction between Q-learning and the amortization loss.",
"Due to the search locally avoiding poor actions, Q-learning will rarely actually operate on low-valued actions, meaning most of its computation is spent refining the estimates for high-valued actions.",
"The softmax cross entropy loss ensures that low-valued actions have lower values than high-valued actions, but does not force these values to be exact.",
"Thus, in this regime we should have good estimates of value for high-valued actions and worse estimates of value for low-valued actions.",
"In contrast, an L2 loss would require the values to be exact for both low and high valued actions.",
"By using cross entropy instead, we can allow the neural network to spend more of its capacity representing the high-valued actions and less capacity representing the low-valued actions, which we care less about in the first place anyway.",
"With vs. without Q-learning Without Q-learning ( Figure C.2, teal) , the SAVE agent's performance suffers dramatically.",
"As discussed in the previous section, the Q-values estimated during search are very noisy, meaning it is not necessarily a good idea to try to match them exactly.",
"Additionally, Q MCTS is on-policy experience and can become stale if Q θ changes too much between when Q MCTS was computed and when it is used for learning.",
"Thus, removing the Q-learning loss makes the learning algorithm much more on-policy and therefore susceptible to the issues that come with on-policy training.",
"Indeed, without the Q-learning loss, we can only rely on the Q-values estimated during search, resulting in much worse performance than when Q-learning is used.",
"UCT vs. PUCT Finally, we compared to a variant which utilizes prior knowledge by transforming the Q-values into a policy via a softmax and then using this policy as a prior with PUCT, rather than using it to initialize the Q-values (Figure C.2, brown) .",
"With large amounts of search, the initial setting of the Q-values should not matter much, but in the case of small search budgets (as seen here), the estimated Q-values do not change much from their initial values.",
"Thus, if the initial values are zero, then the final values will also be close to zero, which later results in the Q-function being regressed towards a nearly uniform distribution of value.",
"By initializing the Q-values with the Qfunction, the values that are regressed towards may be similar to the original Q-function but will not be uniform.",
"Thus, we can more effectively reuse knowledge across multiple searches by initializing the Q-values with UCT rather than incorporating prior knowledge via PUCT."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4313725531101227,
0.07843136787414551,
0.12765957415103912,
0.19230768084526062,
0.1904761791229248,
0.30188679695129395,
0.23333333432674408,
0.08219177275896072,
0.07894736528396606,
0.11764705181121826,
0.0937499925494194,
0.07017543166875839,
0.14705881476402283,
0.05333332717418671,
0.13793103396892548,
0.2535211145877838,
0.5,
0,
0.18867923319339752,
0.07692307233810425,
0.23188404738903046,
0.04255318641662598,
0.060606054961681366,
0.0833333283662796,
0.17777776718139648,
0.07999999821186066,
0.1090909019112587,
0.10958903282880783,
0.08888888359069824,
0.06779660284519196,
0,
0.04081632196903229,
0.07843136787414551,
0.0634920597076416,
0.08163265138864517,
0.10344827175140381,
0.072727270424366,
0.11538460850715637,
0.072727270424366,
0.14705881476402283,
0.09836065024137497,
0.06666666269302368,
0.037735845893621445,
0.07407406717538834
] | SkeAaJrKDS | true | [
"We propose a model-based method called \"Search with Amortized Value Estimates\" (SAVE) which leverages both real and planned experience by combining Q-learning with Monte-Carlo Tree Search, achieving strong performance with very small search budgets."
] |
[
"Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other.",
"We relate this result to the well-known convex duality of Shannon entropy and the softmax function.",
"Such a result is also known as the Donsker-Varadhan formula.",
"This provides a short proof of the equivalence.",
"We then interpret this duality further, and use ideas of convex analysis to prove a new policy inequality relative to soft Q-learning."
] | [
0,
0,
0,
1,
0
] | [
0.2380952388048172,
0.23076923191547394,
0.0952380895614624,
0.5263158082962036,
0.3125
] | HyY0Ff-AZ | false | [
"A short proof of the equivalence of soft Q-learning and policy gradients."
] |
[
"Computer simulation provides an automatic and safe way for training robotic control\n",
"policies to achieve complex tasks such as locomotion.",
"However, a policy\n",
"trained in simulation usually does not transfer directly to the real hardware due\n",
"to the differences between the two environments.",
"Transfer learning using domain\n",
"randomization is a promising approach, but it usually assumes that the target environment\n",
"is close to the distribution of the training environments, thus relying\n",
"heavily on accurate system identification.",
"In this paper, we present a different\n",
"approach that leverages domain randomization for transferring control policies to\n",
"unknown environments.",
"The key idea that, instead of learning a single policy in\n",
"the simulation, we simultaneously learn a family of policies that exhibit different\n",
"behaviors.",
"When tested in the target environment, we directly search for the best\n",
"policy in the family based on the task performance, without the need to identify\n",
"the dynamic parameters.",
"We evaluate our method on five simulated robotic control\n",
"problems with different discrepancies in the training and testing environment\n",
"and demonstrate that our method can overcome larger modeling errors compared\n",
"to training a robust policy or an adaptive policy.",
"Recent developments in Deep Reinforcement Learning (DRL) have shown the potential to learn complex robotic controllers in an automatic way with minimal human intervention.",
"However, due to the high sample complexity of DRL algorithms, directly training control policies on the hardware still remains largely impractical for agile tasks such as locomotion.A promising direction to address this issue is to use the idea of transfer learning which learns a model in a source environment and transfers it to a target environment of interest.",
"In the context of learning robotic control policies, we can consider the real world the target environment and the computer simulation the source environment.",
"Learning in simulated environment provides a safe and efficient way to explore large variety of different situations that a real robot might encounter.",
"However, due to the model discrepancy between physics simulation and the real-world environment, also known as the Reality Gap BID2 BID18 , the trained policy usually fails in the target environment.",
"Efforts have been made to analyze the cause of the Reality Gap BID20 and to develop more accurate computer simulation to improve the ability of a policy when transferred it to real hardware.",
"Orthogonal to improving the fidelity of the physics simulation, researchers have also attempted to cross the reality gap by training more capable policies that succeed in a large variety of simulated environments.",
"Our method falls into the second category.To develop a policy capable of performing in various environments with different governing dynamics, one can consider to train a robust policy or to train an adaptive policy.",
"In both cases, the policy is trained in environments with randomized dynamics.",
"A robust policy is trained under a range of dynamics without identifying the specific dynamic parameters.",
"Such a policy can only perform well if the simulation is a good approximation of the real world dynamics.",
"In addition, for more agile motor skills, robust policies may appear over-conservative due to the uncertainty in the training environments.",
"On the other hand, when an adaptive policy is used, it learns to first identify, implicitly or explicitly, the dynamics of its environment, and then selects the best action according to the identified dynamics.",
"Being able to act differently according to the dynamics allows the adaptive policy to achieve higher performance on a larger range of dynamic systems.",
"However, when the target dynamics is notably different from the training dynamics, it may still produce sub-optimal results for two reasons.",
"First, when a sequence of novel observations is presented, the learned identification model in an adaptive policy may produce inaccurate estimations.",
"Second, even when the identification model is perfect, the corresponding action may not be optimal for the new situation.In this work, we introduce a new method that enjoys the versatility of an adaptive policy, while avoiding the challenges of system identification.",
"Instead of relating the observations in the target environment to the similar experiences in the training environment, our method searches for the best policy directly based on the task performance in the target environment.Our algorithm can be divided to two stages.",
"The first stage trains a family of policies, each optimized for a particular vector of dynamic parameters.",
"The family of policies can be parameterized by the dynamic parameters in a continuous representation.",
"Each member of the family, referred to as a strategy, is a policy associated with particular dynamic parameters.",
"Using a locomotion controller as an example, a strategy associated with low friction coefficient may exhibit cautious walking motion, while a strategy associated with high friction coefficient may result in more aggressive running motion.",
"In the second stage we perform a search over the strategies in the target environment to find the one that achieves the highest task performance.We evaluate our method on three examples that demonstrate transfer of a policy learned in one simulator DART, to another simulator MuJoCo.",
"Due to the differences in the constraint solvers, these simulators can produce notably different simulation results.",
"A more detailed description of the differences between DART and MuJoCo is provided in Appendix A. We also add latency to the MuJoCo environment to mimic a real world scenario, which further increases the difficulty of the transfer.",
"In addition, we use a quadruped robot simulated in Bullet to demonstrate that our method can overcome actuator modeling errors.",
"Latency and actuator modeling have been found to be important for Sim-to-Real transfer of locomotion policies BID20 .",
"Finally, we transfer a policy learned for a robot composed of rigid bodies to a robot whose end-effector is deformable, demonstrating the possiblity of using our method to transfer to problems that are challenging to model faithfully.",
"We have demonstrated that our method, SO-CMA, can successfully transfer policies trained in one environment to a notably different one with a relatively low amount of samples.",
"One advantage of SO-CMA, compared to the baselines, is that it works consistently well across different examples, while none of the baseline methods achieve successful transfer for all the examples.We hypothesize that the large variance in the performance of the baseline methods is due to their sensitivity to the type of task being tested.",
"For example, if there exists a robust controller that works for a large range of different dynamic parameters µ in the task, such as a bipedal running motion in the Walker2d example, training a Robust policy may achieve good performance in transfer.",
"However, when the optimal controller is more sensitive to µ, Robust policies may learn to use overly-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or fail to perform the task (e.g. in Hopper).",
"On the other hand, if the target environment is not significantly different from the training environments, UPOSI may achieve good performance, as in HalfCheetah.",
"However, as the reality gap becomes larger, the system identification model in UPOSI may fail to produce good estimates and result in non-optimal actions.",
"Furthermore, Hist did not achieve successful transfer in any of the examples, possibly due to two reasons:",
"1) it shares similar limitation to UPOSI when the reality gap is large and",
"2) it is in general more difficult to train Hist due to the larger input space, so that with a limited sample budget it is challenging to fine-tune Hist effectively.We also note that although in some examples certain baseline method may achieve successful transfer, the fine-tuning process of these methods relies on having a dense reward signal.",
"In practice, one may only have access to a sparse reward signal in the target environment, e.g. distance traveled before falling to the ground.",
"Our method, using an evolutionary algorithm (CMA), naturally handles sparse rewards and thus the performance gap between our method (SO-CMA) and the baseline methods will likely be large if a sparse reward is used.",
"We have proposed a policy transfer algorithm where we first learn a family of policies simultaneously in a source environment that exhibits different behaviors and then search directly for a policy in the family that performs the best in the target environment.",
"We show that our proposed method can overcome large modeling errors, including those commonly seen on real robotic platforms with relatively low amount of samples in the target environment.",
"These results suggest that our method has the potential to transfer policies trained in simulation to real hardware.There are a few interesting directions that merit further investigations.",
"First, it would be interesting to explore other approaches for learning a family of policies that exhibit different behaviors.",
"One such example is the method proposed by BID11 , where an agent learns diverse skills without a reward function in an unsupervised manner.",
"Another example is the HCP-I policy proposed by BID4 , which learns a latent representation of the environment variations implicitly.",
"Equipping our policy with memories is another interesting direction to investigate.",
"The addition of memory will extend our method to target environments that vary over time.",
"We have investigated in a few options for strategy optimization and found that CMA-ES works well for our examples.",
"However, it would be desired if we can find a way to further reduce the sample required in the target environment.",
"One possible direction is to warm-start the optimization using models learned in simulation, such as the calibration model in or the online system identification model in BID38 .A",
"DIFFERENCES BETWEEN DART AND MUJOCO DART BID19 and MuJoCo BID35 are both physically-based simulators that computes how the state of virtual character or robot evolves over time and interacts with other objects in a physical way. Both",
"of them have been demonstrated for transferring controllers learned for a simulated robot to a real hardware , and there has been work trying to transfer policies between DART and MuJoCo BID36 . The",
"two simulators are similar in many aspects, for example both of them uses generalized coordinates for representing the state of a robot. Despite",
"the many similarities between DART and MuJoCo, there are a few important differences between them that makes transferring a policy trained in one simulator to the other challenging. For the",
"examples of DART-to-MuJoCo transfer presented in this paper, there are three major differences as described below:1. Contact",
"Handling Contact modeling is important for robotic control applications, especially for locomotion tasks, where robots heavily rely on manipulating contacts between end-effector and the ground to move forward. In DART",
", contacts are handled by solving a linear complementarity problem (LCP) (Tan et al.), which ensures that in the next timestep the objects will not penetrate each other, while satisfying the laws of physics. In MuJoCo",
", the contact dynamics is modeled using a complementarity-free formulation, which means the objects might penetrate each other. The resulting",
"impulse will increase with the penetration depth and separate the penetrating objects eventually."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0555555522441864,
0.1249999925494194,
0.14814814925193787,
0.1621621549129486,
0.06666666269302368,
0,
0.1621621549129486,
0.05882352590560913,
0.06896551698446274,
0.06451612710952759,
0.05882352590560913,
0.17142856121063232,
0.1666666567325592,
0.11428570747375488,
0.1666666567325592,
0.07407407462596893,
0.060606054961681366,
0.23529411852359772,
0.2857142686843872,
0.1249999925494194,
0.08510638028383255,
0.19178082048892975,
0.1395348757505417,
0.21739129722118378,
0.19607841968536377,
0.15686273574829102,
0.19230768084526062,
0.18518517911434174,
0.2222222238779068,
0.19999998807907104,
0.24390242993831635,
0.09302324801683426,
0.15094339847564697,
0.17777776718139648,
0.09090908616781235,
0.17777776718139648,
0.1355932205915451,
0.1818181723356247,
0.05128204822540283,
0.20512820780277252,
0.19512194395065308,
0.11999999731779099,
0.2295081913471222,
0.1538461446762085,
0.2181818187236786,
0.3181818127632141,
0.19512194395065308,
0.22641508281230927,
0.2448979616165161,
0.1875,
0.3050847351551056,
0.07547169178724289,
0.1304347813129425,
0.21739129722118378,
0.1463414579629898,
0.15789473056793213,
0.16438356041908264,
0.12765957415103912,
0.1818181723356247,
0.3272727131843567,
0.30188679695129395,
0.19999998807907104,
0.09302324801683426,
0.1702127605676651,
0.1395348757505417,
0.05714285373687744,
0.05128204822540283,
0.2380952388048172,
0.1818181723356247,
0.21276594698429108,
0.16949151456356049,
0.11538460850715637,
0.13333332538604736,
0.2800000011920929,
0.1463414579629898,
0.11320754140615463,
0.13333332538604736,
0.13636362552642822,
0.1111111044883728
] | H1g6osRcFQ | true | [
"We propose a policy transfer algorithm that can overcome large and challenging discrepancies in the system dynamics such as latency, actuator modeling error, etc."
] |
[
"We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task.",
"The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations.",
"Discretization enables the direct application of algorithms from the NLP community which require discrete inputs.",
"Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.",
"Learning discrete representations of speech has gathered much recent interest (Versteegh et al., 2016; Dunbar et al., 2019) .",
"A popular approach to discover discrete units is via autoencoding (Tjandra et al., 2019; Eloff et al., 2019; Chorowski et al., 2019) sometimes coupled with an autoregressive model .",
"Another line of research is to learn continuous speech representations in a self-supervised way via predicting context information (Chung & Glass, 2018; van den Oord et al., 2018; Schneider et al., 2019) .",
"In this paper, we combine these two lines of research by learning discrete representations of speech via a context prediction task instead of reconstructing the input.",
"This enables us to directly apply well performing NLP algorithms to speech data ( Figure 1a ).",
"The vq-wav2vec encoder maps raw audio (X ) to a dense representation (Z) which is quantized (q) toẐ and aggregated into context representations (C); training requires future time step prediction.",
"(b) Acoustic models are trained by quantizing the raw audio with vq-wav2vec, then applying BERT to the discretized sequence and feeding the resulting representations into the acoustic model to output transcriptions.",
"Our new discretization algorithm, vq-wav2vec, learns discrete representations of fixed length segments of audio signal by utilizing the wav2vec loss and architecture (Schneider et al, 2019; §2) .",
"To choose the discrete variables, we consider a Gumbel-Softmax approach (Jang et al., 2016) as well as online k-means clustering, similar to VQ-VAE (Oord et al., 2017; Eloff et al., 2019; §3) .",
"We then train a Deep Bidirectional Transformer (BERT; Devlin et al., 2018; on the discretized unlabeled speech data and input these representations to a standard acoustic model (Figure 1b; §4) .",
"Our experiments show that BERT representations perform better than log-mel filterbank inputs as well as dense wav2vec representations on both TIMIT and WSJ benchmarks.",
"Discretization of audio enables the direct application of a whole host of algorithms from the NLP literature to speech data.",
"For example, we show that a standard sequence to sequence model from the NLP literature can be used to perform speech recognition over discrete audio tokens ( §5, §6).",
"vq-wav2vec is a self-supervised algorithm that quantizes unlabeled audio data which makes it amenable to algorithms requiring discrete data.",
"This approach improves the state of the art on the WSJ and TIMIT benchmarks by leveraging BERT pre-training.",
"In future work, we plan to apply other algorithms requiring discrete inputs to audio data and to explore self-supervised pre-training algorithms which mask part of the continuous audio input.",
"Another future work avenue is to finetune the pre-trained model to output transcriptions instead of feeding the pre-trained features to a custom ASR model."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.1764705777168274,
0.12121211737394333,
0.19354838132858276,
0.1621621549129486,
0.11764705181121826,
0.09756097197532654,
0.08510638028383255,
0.09756097197532654,
0.3030303120613098,
0.12765957415103912,
0.1818181723356247,
0.1860465109348297,
0.13333332538604736,
0.1702127605676651,
0.20512819290161133,
0.29411762952804565,
0.1818181723356247,
0.34285715222358704,
0.12121211737394333,
0.4285714328289032,
0.0555555522441864
] | rylwJxrYDS | true | [
"Learn how to quantize speech signal and apply algorithms requiring discrete inputs to audio data such as BERT."
] |
[
"Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments.",
"However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this \\citep{openai_2017_dota, vinyals_2017_starcraft}.",
"Moreover, when the opponents in a competitive game are suboptimal, the current \\textit{Nash Equilibrium} seeking, self-play algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own.",
"This suggests that a learning algorithm that is beyond conventional self-play is necessary.",
"We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP).",
"We demonstrate that the ensemble policy generated by HASP can achieve better performance while facing unseen opponents that use sub-optimal policies.",
"On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that HASP can perform better than conventional self-play as well as achieve 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.",
"Deep reinforcement learning (RL) has achieved significant success on many complex sequential decision-making problems.",
"Most of the problems are in robotics domain or video games BID16 .",
"However, complex real-time strategic competitive games still pose a strong challenge to the current deep reinforcement learning method due to the requirement of the ability to handle long-term scheduling, partial observability and multi-agent collaboration/competition.",
"BID28 BID20 .",
"Competitive games such as Go, in which each player optimize their own interests by finding the best response to opponents' strategies, are usually studied mainly on finding the Nash Equilibrium solutions BID20 , namely a combination of players' strategies upon which neither player can obtain higher rewards by modifying their strategy unilaterally BID23 .",
"However, in the real-world, opponents can have a variety of strengths and play styles and do not always adopt the equilibrium solutions.",
"In fact, human players are often remarkably good at analyzing strategies, tendencies, and flaws in opponents' behavior and then exploiting the opponents even if the resulting exploiting strategies themselves are subject to exploitation.",
"Exploitation is a central component of sports and competitive games.",
"This is also applicable in other real-world competitive domains, including airport and network security, financial and energy trading, traffic control, routing, etc.",
"Therefore, exploring game-playing strategies that intentionally avoid the equilibrium solution and instead \"learn to exploit\" is a promising research direction toward more capable, adaptable, and ultimately more human-like artificial agents.",
"Hence, we develop a new algorithm Hierarchical Agent with Self-Play that learns to exploit the suboptimality of opponents in order to learn a wider variety of behaviors more in line with what humans might choose to display.In this work, we focus on two-player, symmetric, extensive form games of imperfect information, though generalization to more players and asymmetric games is feasible and relatively straightforward.",
"First, we adopt recent Proximal Policy Gradient (PPO)(?) methods in deep reinforcement learning (RL), which has been successful at handling complex games BID16 BID24 BID17 and many other fields BID20 ) Second, we aim to automatically acquire a strong strategy that generalizes against opponents that we have not seen in training.",
"Here we use self-play to gradually acquire more and more complex behaviors.",
"This technique has proven successful at solving backgammon BID27 , the game of Go , imperfect information games such as Poker BID8 BID9 , continuous control BID0 , and modern video games BID20 .In",
"this paper, we investigate a new method for learning strong policies on multi-player games. We",
"introduce Hierarchical Agent with Self-Play , our hierarchical learning algorithm that automatically learns several diverse, exploitable polices and combines them into an ensemble model that draws on the experience of the sub-policies to respond appropriately to different opponents. Then",
", we show the results of some experiments on two multiplayer games: iterated Rock-Paper-Scissors and a partially observable real-time strategy game based on a popular online game generals.io (http://generals.io/). We",
"show that compared to conventional self-play, our algorithm learns a more diverse set of strategies and obtains higher rewards against test opponents of different skill levels. Remarkably",
", it can achieve 77% win rate against the FloBot, the strongest open-sourced scripted bot on the generals.io online leaderboard.",
"In this paper, we investigate a novel learning approach Hierarchical Agent with Self-Play to learning strategies in competitive games and real-time strategic games by learning several opponent-dependent sub-policies.",
"We evaluate its performance on a popular online game, where we show that our approach generalizes better than conventional self-play approaches to unseen opponents.",
"We also show that our algorithm vastly outperforms conventional self-play when it comes to learning optimal mixed strategies in simpler matrix games.",
"Though our method has achieved good results, there are some areas which could be improved in future research.",
"In the future, we hope to also achieve good performance on larger versions of Generals, where games last longer and therefore learning is harder.",
"We would also like to investigate further the effects that our algorithm has on exploration with sparse reward."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08888888359069824,
0.16129031777381897,
0.1428571343421936,
0.25641024112701416,
0.746268630027771,
0.25,
0.2631579041481018,
0.0952380895614624,
0.04999999701976776,
0.20689654350280762,
0,
0.10666666179895401,
0.0833333283662796,
0,
0.15789473056793213,
0.04081632196903229,
0.0714285671710968,
0.19999998807907104,
0.10666666179895401,
0.05128204822540283,
0.03448275476694107,
0.3255814015865326,
0.1875,
0.14035087823867798,
0.1111111044883728,
0.12765957415103912,
0.37735849618911743,
0.3461538553237915,
0.23999999463558197,
0,
0.19230768084526062,
0.17391303181648254
] | HJz6QhR9YQ | true | [
"We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve high performance than conventional self-play on competitive real-time strategic games."
] |
[
"We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity.",
"There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units.",
"We perform a systematic comparison of popular choices for a self-attentional architecture.",
"Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train than the popular character input CNN while having a lower number of parameters.",
"On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result and on the Billion Word benchmark, we achieve 23.02 perplexity.",
"Language modeling is a basic task in natural language processing, with many applications such as speech recognition BID1 and statistical machine translation BID28 BID33 BID2 .",
"Recently, much progress has been made by neural methods BID3 BID20 ) based on LSTMs BID13 , gated convolutional networks BID7 and self-attentional networks BID0 .There",
"are different choices for the basic unit we wish to model, including full words BID3 , characters for the input BID15 , or also the output BID18 as well as sub-words BID4 BID19 . Word-based",
"models are particularly challenging since computing probabilities for all 800K words of the BILLION WORD benchmark is still a substantial part of the overall computation BID6 .A popular approach",
"to lower the computational burden is to structure the output vocabulary so that not all probabilities need to be computed. The hierarchical softmax",
"does this by introducing latent variables or clusters to simplify normalization BID8 BID22 BID21 . This has been further improved",
"by the adaptive softmax which introduces a variable capacity scheme for output word embeddings, assigning more parameters to frequent words and fewer parameters to rare words BID10 .In this paper, we introduce adaptive",
"input embeddings which extend the adaptive softmax to input word representations. This factorization assigns more capacity",
"to frequent words and reduces the capacity for less frequent words with the benefit of reducing overfitting to rare words. For a competitive setup on the BILLION WORD",
"benchmark, adaptive input embeddings reduce the number of parameters in the input and output layers by 23% while achieving higher accuracy over fixed size embeddings. When the adaptive input representations are",
"tied with an adaptive softmax in the output, then the number of parameters is reduced by a total of 61%.Our experiments compare models based on word",
"inputs, character inputs, as well as sub-word units using a self-attention architecture BID34 . We show that models with adaptive word representations",
"can outperform very strong character-based models while training more than twice as fast. We also substantially improve adaptive softmax by introducing",
"additional dropout regularization in the tail projection. On the WIKITEXT-103 benchmark we achieve a perplexity of 18.7",
", a DISPLAYFORM0 A m e 4 R X e n K n z 4 r w 7 H 8 v R k l P s n M I f O J 8 / t g + R 9 g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" q u R U N k T x I l J t y S 1 E 3 C q d 9 P h a 6 P U = \" > A DISPLAYFORM1 A m e 4 R X e n K n z 4 r w 7 H 8 v R k l P s n M I f O J 8 / t g + R 9 g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" q u R U N k T x I l J t y S 1 E 3 C q d 9 P h a 6 P U = \" > A DISPLAYFORM2 A m e 4 R X e n K n z 4 r w 7 H 8 v R k l P s n M I f O J 8 / t g + R 9 g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" q u R U N k T x I l J t y S 1 E 3 C q d 9 P h a 6 P U = \" > A DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 DISPLAYFORM9 DISPLAYFORM10",
"Adaptive input embeddings vary the size of input word embeddings which can improve accuracy while drastically reducing the number of model parameters.",
"When sharing parameters with an adaptive softmax, the number of parameters can be further reduced which improves training speed.",
"We presented a comparison between different input and output layer factorizations including word inputs, character inputs and sub-word units in both the input and output.Our experiments show that models with adaptive input embeddings train faster compared to character input CNNs while achieving higher accuracy.",
"We achieve new state of the art results on and BILLION WORD.",
"In future work, we will apply variable sized input embeddings to other tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12121211737394333,
0.1875,
0,
0.09756097197532654,
0.20000000298023224,
0.05405404791235924,
0.10810810327529907,
0.04878048226237297,
0,
0,
0,
0.1463414579629898,
0.29629629850387573,
0.1764705777168274,
0.1621621549129486,
0.10810810327529907,
0.06451612710952759,
0,
0,
0,
0.19999998807907104,
0,
0.1599999964237213,
0.1666666567325592,
0.1599999964237213
] | ByxZX20qFQ | true | [
"Variable capacity input word embeddings and SOTA on WikiText-103, Billion Word benchmarks."
] |
[
"In this paper, we consider the problem of detecting object under occlusion.",
"Most object detectors formulate bounding box regression as a unimodal task (i.e., regressing a single set of bounding box coordinates independently).",
"However, we observe that the bounding box borders of an occluded object can have multiple plausible configurations.",
"Also, the occluded bounding box borders have correlations with visible ones.",
"Motivated by these two observations, we propose a deep multivariate mixture of Gaussians model for bounding box regression under occlusion.",
"The mixture components potentially learn different configurations of an occluded part, and the covariances between variates help to learn the relationship between the occluded parts and the visible ones.",
"Quantitatively, our model improves the AP of the baselines by 3.9% and 1.2% on CrowdHuman and MS-COCO respectively with almost no computational or memory overhead.",
"Qualitatively, our model enjoys explainability since we can interpret the resulting bounding boxes via the covariance matrices and the mixture components.",
"We propose a multivariate mixture of Gaussians model for object detection under occlusion.",
"Quantitatively, it demonstrates consistent improvements over the baselines among MS-COCO, PASCAL VOC 2007, CrowdHuman, and VehicleOcclusion.",
"Qualitatively, our model enjoys explainability as the detection results can be diagnosed via the covariance matrices and the mixture components."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.23999999463558197,
0.3030303120613098,
0.19999998807907104,
0.1666666567325592,
0.7878788113594055,
0.11428570747375488,
0.10526315122842789,
0.1875,
0.692307710647583,
0,
0.12903225421905518
] | H1lpE0VFPS | true | [
"a deep multivariate mixture of Gaussians model for bounding box regression under occlusion"
] |
[
"Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. ",
"Adversarial training, one of the most successful empirical defenses to adversarial examples, refers to training on adversarial examples generated within a geometric constraint set.",
"The most commonly used geometric constraint is an $L_p$-ball of radius $\\epsilon$ in some norm.",
"We introduce adversarial training with Voronoi constraints, which replaces the $L_p$-ball constraint with the Voronoi cell for each point in the training set.",
"We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10.",
"Deep learning at scale has led to breakthroughs on important problems in computer vision (Krizhevsky et al. (2012) ), natural language processing (Wu et al. (2016) ), and robotics (Levine et al. (2015) ).",
"Shortly thereafter, the interesting phenomena of adversarial examples was observed.",
"A seemingly ubiquitous property of machine learning models where perturbations of the input that are imperceptible to humans reliably lead to confident incorrect classifications (Szegedy et al. (2013) ; Goodfellow et al. (2014) ).",
"What has ensued is a standard story from the security literature: a game of cat and mouse where defenses are proposed only to be quickly defeated by stronger attacks (Athalye et al. (2018) ).",
"This has led researchers to develop methods which are provably robust under specific attack models ; Sinha et al. (2018) ; Raghunathan et al. (2018) ; ) as well as empirically strong heuristics ; ).",
"As machine learning proliferates into society, including security-critical settings like health care Esteva et al. (2017) or autonomous vehicles Codevilla et al. (2018) , it is crucial to develop methods that allow us to understand the vulnerability of our models and design appropriate counter-measures.",
"Adversarial training has been one of the few heuristic methods which has not been defeated by stronger attacks.",
"In this paper, we propose a modification to the standard paradigm of adversarial training.",
"We replace the L p -ball constraint with the Voronoi cells of the training data, which have several advantages detailed in Section 3.",
"In particular, we need not set the maximum perturbation size as part of the training procedure.",
"The Voronoi cells adapt to the maximum allowable perturbation size locally on the data distribution.",
"We show how to construct adversarial examples within the Voronoi cells and how to incorporate Voronoi constraints into standard adversarial training.",
"In Section 5 we show that adversarial training with Voronoi constraints gives state-of-the-art robustness results on MNIST and competitive results on CIFAR-10.",
"The L p -ball constraint for describing adversarial perturbations has been a productive formalization for designing robust deep networks.",
"However, the use of L p -balls has significant drawbacks in highcodimension settings and leads to sub-optimal results in practice.",
"Adversarial training with Voronoi constraints improves robustness by giving the adversary the freedom to explore the neighborhood around the data distribution."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.2380952388048172,
0.25,
0.12121211737394333,
0.3333333134651184,
0.3414634168148041,
0.04255318641662598,
0.1428571343421936,
0.1666666567325592,
0.11764705181121826,
0.13333332538604736,
0.1355932205915451,
0.1764705777168274,
0.25,
0.4615384638309479,
0.1818181723356247,
0.3125,
0.34285715222358704,
0.15789473056793213,
0.1111111044883728,
0.1621621549129486,
0.3333333134651184
] | HJeb9xSYwB | true | [
"We replace the Lp ball constraint with the Voronoi cells of the training data to produce more robust models. "
] |
[
"Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.",
"However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments.",
"In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives:\n",
"(1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks;\n",
"(2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments.\n",
"Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for NDH task.",
"Navigation in visual environments by following natural language guidance (Hemachandra et al., 2015) is a fundamental capability of intelligent robots that simulate human behaviors, because humans can easily reason about the language guidance and navigate efficiently by interacting with the visual environments.",
"Recent efforts (Anderson et al., 2018b; Das et al., 2018; Thomason et al., 2019) empower large-scale learning of natural language grounded navigation that is situated in photorealistic simulation environments.",
"Nevertheless, the generalization problem commonly exists for these tasks, especially indoor navigation: the agent usually performs poorly on unknown environments that have never been seen during training.",
"One of the main causes for such behavior is data scarcity as it is expensive and time-consuming to extend either visual environments or natural language guidance.",
"The number of scanned houses for indoor navigation is limited due to high expense and privacy concerns.",
"Besides, unlike vision-only navigation tasks (Mirowski et al., 2018; Xia et al., 2018; Manolis Savva* et al., 2019; Kolve et al., 2017) where episodes can be exhaustively sampled in simulation, natural language grounded navigation is supported by human demonstrated interaction and communication in natural language.",
"It is impractical to fully collect and cover all the samples for individual tasks.",
"Therefore, it is essential though challenging to efficiently learn a more generalized policy for natural language grounded navigation tasks from existing data (Wu et al., 2018a; b) .",
"In this paper, we study how to resolve the generalization and data scarcity issues from two different angles.",
"First, previous methods are trained for one task at the time, so each new task requires training a brand new agent instance that can only solve the one task it was trained on.",
"In this work, we propose a generalized multitask model for natural language grounded navigation tasks such as Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH), aiming at efficiently transferring knowledge across tasks and effectively solving both tasks with one agent simultaneously.",
"Moreover, although there are thousands of trajectories paired with language guidance, the underlying house scans are restricted.",
"For instance, the popular Matterport3D dataset (Chang et al., 2017) contains only 61 unique house scans in the training set.",
"The current models perform much better in seen environments by taking advantage of the knowledge of specific houses they have acquired over multiple task completions during training, but fail to generalize to houses not seen during training.",
"Hence we propose an environment-agnostic learning method to learn a visual representation that is invariant to specific environments but still able to support navigation.",
"Endowed with the learned environment-agnostic representations, the agent is further prevented from the overfitting issue and generalizes better on unseen environments.",
"To the best of our knowledge, we are the first to introduce natural language grounded multitask and environment-agnostic training regimes and validate their effectiveness on VLN and NDH tasks.",
"Extensive experiments demonstrate that our environment-agnostic multitask navigation model can not only efficiently execute different language guidance in indoor environments but also outperform the single-task baseline models by a large margin on both tasks.",
"Besides, the performance gap between seen and unseen environments is significantly reduced.",
"We also set a new state of the art on NDH with over 120% improvement in terms of goal progress.",
"In this work, we show that the model trained using environment-agnostic multitask learning approach learns a generalized policy for the two natural language grounded navigation tasks.",
"It closes down the gap between seen and unseen environments, learns more generalized environment representations and effectively transfers knowledge across tasks outperforming baselines on both the tasks simultaneously by a significant margin.",
"At the same time, the two approaches independently benefit the agent learning and are complementary to each other.",
"There are possible future extensions to our work-the MT-RCM can further be adapted to other language-grounded navigation datasets, such as those using Street View (e.g., Touchdown (Chen et al., 2019) Table 6 presents a more detailed ablation of Table 5 using different parts of dialog history.",
"The results prove that agents rewarded for getting closer to the goal room consistently outperform agents rewarded for getting closer to the exact goal location.",
"Table 7 presents a more detailed analysis from Table 3 with access to different parts of dialog history.",
"The models with shared language encoder consistently outperform those with separate encoders.",
"Figure 4: Visualizing performance gap between seen and unseen environments for VLN and NDH tasks.",
"For VLN, the plotted metric is agent's success rate while for NDH, the metric is agent's progress."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2631579041481018,
0.05714285373687744,
0.19512194395065308,
0.19230768084526062,
0.3499999940395355,
0.1355932205915451,
0.1090909019112587,
0.22727271914482117,
0.045454539358615875,
0.1860465109348297,
0.17142856121063232,
0.18867924809455872,
0.1875,
0.52173912525177,
0.0555555522441864,
0.08888888359069824,
0.3571428656578064,
0.05882352590560913,
0,
0.03999999538064003,
0.3499999940395355,
0.05405404791235924,
0.3181818127632141,
0.23076923191547394,
0,
0.10810810327529907,
0.5581395030021667,
0.1702127605676651,
0.11764705181121826,
0.12903225421905518,
0.11428570747375488,
0.17142856121063232,
0.06896550953388214,
0.1249999925494194,
0.06451612710952759
] | HkxzNpNtDS | true | [
"We propose to learn a more generalized policy for natural language grounded navigation tasks via environment-agnostic multitask learning."
] |
[
"In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein (W.) barycenters.",
"Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings.",
"Using W. barycenters to find the consensus between models allows us to balance confidence and semantics in finding the agreement between the models.",
"We show applications of Wasserstein ensembling in attribute-based classification, multilabel learning and image captioning generation.",
"These results show that the W. ensembling is a viable alternative to the basic geometric or arithmetic mean ensembling.",
"Model ensembling consists in combining many models into a stronger, more robust and more accurate model.",
"Ensembling is ubiquitous in machine learning and yields improved accuracies across multiple prediction tasks such as multi-class or multi-label classification.",
"For instance in deep learning, output layers of Deep Neural Networks(DNNs), such as softmaxes or sigmoids, are usually combined using a simple arithmetic or geometric mean.",
"The arithmetic mean rewards confidence of the models while the geometric means seeks the consensus across models.What is missing in the current approaches to models ensembling, is the ability to incorporate side information such as class relationships represented by a graph or via an embedding space.",
"For example a semantic class can be represented with a finite dimensional vector in a pretrained word embedding space such as GloVe BID28 .",
"The models' predictions can be seen as defining a distribution in this label space defined by word embeddings: if we denote p i to be the confidence of a model on a bin corresponding to a word having an embedding x i , the distribution on the label space is therefore p = i p i δ xi .",
"In order to find the consensus between many models predictions, we propose to achieve this consensus within this representation in the label space.",
"In contrast to arithmetic and geometric averaging, which are limited to the independent bins' confidence, this has the advantage of carrying the semantics to model averaging via the word embeddings.",
"More generally this semantic information can be encoded via cost a matrix C, where C ij encodes the dissimilarity between semantic classes i and j, and C defines a ground metric on the label space.To achieve this goal, we propose to combine model predictions via Wasserstein (W.) barycenters BID0 , which enables us to balance the confidence of the models and the semantic side information in finding a consensus between the models.",
"Wasserstein distances are a naturally good fit for such a task, since they are defined with respect to a ground metric in the label space of the models, which carry such semantic information.",
"Moreover they enable the possiblity of ensembling predictions defined on different label sets, since the Wasserstein distance allows to align and compare those different predictions.",
"Since their introduction in BID0 W. barycenter computations were facilitated by entropic regularization BID6 and iterative algorithms that rely on iterative Bregman projections BID2 .",
"Many applications have used W. barycenters in Natural Language Processing (NLP), clustering and graphics.",
"We show in this paper that W. barycenters are effective in model ensembling and in finding a semantic consensus, and can be applied to a wide range of problems in machine learning (Table 1) .The",
"paper is organized as follows: In Section 2 we revisit geometric and arithmetic means from a geometric viewpoint, showing that they are 2 and Kullback Leibler divergence KL (extended KL divergence) barycenters respectively. We",
"give a brief overview of optimal transport metric and W. barycenters in Section 3.",
"We highlight the advantages of W. barycenter ensembling in terms of semantic smoothness and diversity in Section 4.",
"Related work on W. barycenters in Machine learning are presented in Section 5.",
"Finally we show applications of Wasserstein ensembling on attribute based classification, multi-label learning and image captioning in Section 6.",
"We showed in this paper that W. barycenters are effective in model ensembling in machine learning.",
"In the unbalanced case we showed their effectiveness in attribute based classification, as well as in improving the accuracy of multi-label classification.",
"In the balanced case, we showed that they promote diversity and improve natural language generation by incorporating the knowledge of synonyms or word embeddings.",
"Table 8 : Sample output (top 20 words) of barycenter for different similarity matrices K based on GloVe (columns titles denote the distance of K from identity K − I F and corresponding .).",
"Each column shows a word and its corresponding probability over the vocabulary.",
"Note that the last column coincides with the output from geometric mean.",
"Table 8 shows the effect of entropic regularization ε on the resulting distribution of the words of W. barycenter using GloVe embedding matrix.",
"As K moves closer to the identity matrix, the entropy of barycenter decreases, leading to outputs that are close/identical to the geometric mean.",
"On the other hand, with a large entropic regularization, matrix K moves away from identity, becoming an uninformative matrix of all 1's.",
"This eventually leads to a uniform distribution which spreads the probability mass equally across all the words.",
"This can be also visualized with a histogram in Figure 5 , where the histograms on the bottom represent distributions that are close to uniform, which can be considered as failure cases of W. barycenter, since the image captioner in this case can only generate meaningless, gibberish captions."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4516128897666931,
0.1599999964237213,
0.1428571343421936,
0.1599999964237213,
0.14814814925193787,
0.1599999964237213,
0,
0,
0.04081632196903229,
0.06451612710952759,
0.11764705926179886,
0.20689654350280762,
0.11428570747375488,
0.21212120354175568,
0.21052631735801697,
0.1875,
0,
0.0833333283662796,
0.25,
0.09999999403953552,
0.0833333283662796,
0.1538461446762085,
0.09090908616781235,
0.20689654350280762,
0.25,
0.06896550953388214,
0.060606054961681366,
0.04878048226237297,
0,
0,
0,
0.06896550953388214,
0,
0.07692307233810425,
0.038461536169052124
] | H1g4k309F7 | true | [
"we propose to use Wasserstein barycenters for semantic model ensembling"
] |
[
"While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge.",
"The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combination of multitask learning and self-supervised learning on unlabeled data.",
"We trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks.",
"We describe several easily implemented self-supervised learning tasks that can operate on any large, unlabeled audio corpus.",
"We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks.",
"We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance.",
"Deep neural networks (DNNs) are the bedrock of state-of-the-art approaches to modeling and classifying auditory data BID0 ; van den BID20 ; Li et al. (2017) ).",
"However, these data-hungry neural architectures are not always matched to the available training resources, and the creation of large-scale corpora of audio training data is costly and time-consuming.",
"This problem is exacerbated when training directly on the acoustic waveform, where input is highdimensional and noisy.",
"While labeled datasets are quite scarce, we have access to virtually infinite sources of unlabeled data, which makes effective unsupervised learning an enticing research direction.",
"Here we aim to develop a technique that enables models to generalize better by incorporating auxiliary self-supervised auditory tasks during model training BID4 ).Our",
"main contributions in this paper are twofold: the successful identification of appropriate selfsupervised audio-related tasks and the demonstration that they can be trained jointly with supervised tasks in order to significantly improve performance. We",
"also show how to use WaveNet as a general feature extractor capable of providing rich audio representations using raw waveform data as input. We",
"hypothesize that by learning multi-scale hierarchical representations from raw audio, WaveNetbased models are capable of adapting to subtle variations within tasks in an efficient and robust manner. We",
"explore this framework on three supervised classification tasks -audio tagging, speaker identification and speech command recognition -and demonstrate that one can leverage unlabeled data to improve performance on each task. We",
"further show that these results pair well with more common data augmentation techniques, and that our proposed self-supervised tasks can also be used as a pre-training stage to provide performance improvements through transfer learning.These authors contributed equally to this work."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.060606058686971664,
0.1764705777168274,
0.07407406717538834,
0.1599999964237213,
0.04999999701976776,
0,
0.05882352590560913,
0.1249999925494194,
0.0833333283662796,
0.060606058686971664,
0,
0.05128204822540283,
0.06451612710952759,
0.1111111044883728,
0.10526315122842789,
0.08510638028383255
] | BkecJjCEuN | true | [
"Label-efficient audio classification via multi-task learning and self-supervision"
] |
[
"Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories.",
"In this context calculating the softmax normalizing constant is prohibitively expensive.",
"This has spurred a growing literature of efficiently computable but biased estimates of the softmax.",
"In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch).",
"We compare our unbiased methods' empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.",
"Under the softmax model 1 the probability that a random variable y takes on the label ∈ {1, ..., K}, is given by p(y = |x; W ) = e where x ∈ R D is the covariate, w k ∈ R D is the vector of parameters for the k-th class, and W = [w 1 , w 2 , ..., w K ] ∈ R D×K is the parameter matrix.",
"Given a dataset of N label-covariate pairs D = {(y i , x i )} N i=1 , the ridge-regularized maximum log-likelihood problem is given by DISPLAYFORM0 where W 2 denotes the Frobenius norm.This paper focusses on how to maximize (2) when N, K, D are all large.",
"Having large N, K, D is increasingly common in modern applications such as natural language processing and recommendation systems, where N, K, D can each be on the order of millions or billions BID15 BID6 BID4 .A",
"natural approach to maximizing L(W ) with large N, K, D is to use Stochastic Gradient Descent (SGD), sampling a mini-batch of datapoints each iteration. However",
"if K, D are large then the O(KD) cost of calculating the normalizing sum K k=1 e x i w k in the stochastic gradients can still be prohibitively expensive. Several",
"approximations that avoid calculating the normalizing sum have been proposed to address this difficulty. These include",
"tree-structured methods BID2 BID7 BID9 , sampling methods BID1 BID14 BID10 and self-normalization BID0 . Alternative models",
"such as the spherical family of losses that do not require normalization have been proposed to sidestep the issue entirely BID13 . BID11 avoid calculating",
"the sum using a maximization-majorization approach based on lower-bounding the eigenvalues of the Hessian matrix. All 2 of these approximations",
"are computationally tractable for large N, K, D, but are unsatisfactory in that they are biased and do not converge to the optimal W * = argmax L(W ).Recently BID16 managed to recast",
"(2) as a double-sum over N and K. This formulation is amenable to SGD that samples both a datapoint and class each iteration, reducing the per iteration cost to O(D). The problem is that vanilla SGD",
"when applied to this formulation is unstable, in that the gradients suffer from high variance and are susceptible to computational overflow. BID16 deal with this instability",
"by occasionally calculating the normalizing sum for all datapoints at a cost of O(N KD). Although this achieves stability",
", its high cost nullifies the benefit of the cheap O(D) per iteration cost.The goal of this paper is to develop robust SGD algorithms for optimizing double-sum formulations of the softmax likelihood. We develop two such algorithms.",
"The first is a new SGD method called",
"U-max, which is guaranteed to have bounded gradients and converge to the optimal solution of (2) for all sufficiently small learning rates. The second is an implementation of Implicit",
"SGD, a stochastic gradient method that is known to be more stable than vanilla SGD and yet has similar convergence properties BID18 . We show that the Implicit SGD updates for the",
"doublesum formulation can be efficiently computed and has a bounded step size, guaranteeing its stability.We compare the performance of U-max and Implicit SGD to the (biased) state-of-the-art methods for maximizing the softmax likelihood which cost O(D) per iteration. Both U-max and Implicit SGD outperform all other",
"methods. Implicit SGD has the best performance with an average",
"log-loss 4.29 times lower than the previous state-of-the-art.In summary, our contributions in this paper are that we:1. Provide a simple derivation of the softmax double-sum",
"formulation and identify why vanilla SGD is unstable when applied to this formulation (Section 2). 2. Propose the U-max algorithm to stabilize the SGD updates",
"and prove its convergence (Section 3.1). 3. Derive an efficient Implicit SGD implementation, analyze",
"its runtime and bound its step size (Section 3.2). 4. Conduct experiments showing that both U-max and Implicit",
"SGD outperform the previous state-of-the-art, with Implicit SGD having the best performance (Section 4).",
"In this paper we have presented the U-max and Implicit SGD algorithms for optimizing the softmax likelihood.",
"These are the first algorithms that require only O(D) computation per iteration (without extra work at the end of each epoch) that converge to the optimal softmax MLE.",
"Implicit SGD can be efficiently implemented and clearly out-performs the previous state-of-the-art on seven real world datasets.",
"The result is a new method that enables optimizing the softmax for extremely large number of samples and classes.So far Implicit SGD has only been applied to the simple softmax, but could also be applied to any neural network where the final layer is the softmax.",
"Applying Implicit SGD to word2vec type models, which can be viewed as softmaxes where both x and w are parameters to be fit, might be particularly fruitful.",
"10 The learning rates η = 10 3,4 are not displayed in the FIG2 for visualization purposes.",
"It had similar behavior as η = 10 2 ."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.24390242993831635,
0.1249999925494194,
0.17142856121063232,
0.3272727131843567,
0.0952380895614624,
0.14492753148078918,
0.0923076868057251,
0.145454540848732,
0.1304347813129425,
0.11999999731779099,
0.05405404791235924,
0.0555555522441864,
0.09090908616781235,
0.21052631735801697,
0.07692307233810425,
0.039215680211782455,
0.08888888359069824,
0.19999998807907104,
0.19230768084526062,
0.06896550953388214,
0.12765957415103912,
0.16326530277729034,
0.16393442451953888,
0.19354838132858276,
0.12765957415103912,
0.0952380895614624,
0,
0.052631575614213943,
0.12121211737394333,
0.21621620655059814,
0.17391303181648254,
0.10526315122842789,
0.2295081913471222,
0,
0.10810810327529907,
0
] | H1bhRHeA- | true | [
"Propose first methods for exactly optimizing the softmax distribution using stochastic gradient with runtime independent on the number of classes or datapoints."
] |
[
"Crafting adversarial examples on discrete inputs like text sequences is fundamentally different from generating such examples for continuous inputs like images.",
"This paper tries to answer the question: under a black-box setting, can we create adversarial examples automatically to effectively fool deep learning classifiers on texts by making imperceptible changes?",
"Our answer is a firm yes.",
"Previous efforts mostly replied on using gradient evidence, and they are less effective either due to finding the nearest neighbor word (wrt meaning) automatically is difficult or relying heavily on hand-crafted linguistic rules.",
"We, instead, use Monte Carlo tree search (MCTS) for finding the most important few words to perturb and perform homoglyph attack by replacing one character in each selected word with a symbol of identical shape. ",
"Our novel algorithm, we call MCTSBug, is black-box and extremely effective at the same time.",
"Our experimental results indicate that MCTSBug can fool deep learning classifiers at the success rates of 95% on seven large-scale benchmark datasets, by perturbing only a few characters. ",
"Surprisingly, MCTSBug, without relying on gradient information at all, is more effective than the gradient-based white-box baseline.",
"Thanks to the nature of homoglyph attack, the generated adversarial perturbations are almost imperceptible to human eyes.",
"Recent studies have shown that adding small modifications to continuous inputs can fool state-ofthe-art deep classifiers, resulting in incorrect classifications BID33 BID15 .",
"This phenomenon was first formulated as adding tiny and often imperceptible perturbations on image pixel values that could fool deep learning models to make wrong predictions.",
"It raises concerns about the robustness of deep learning systems, considering that they have become core components of many safety-sensitive applications.",
"For a given classifier F and a test sample x, recent literature defined such perturbations as vector ∆x and the resulting sample x = x + ∆x as an adversarial example BID15 .Deep",
"learning has achieved remarkable results in the field of natural language processing (NLP), including sentiment analysis, relation extraction, and machine translation BID35 BID26 BID34 . In contrast",
"to the large body of research on adversarial examples for image classification BID15 BID33 BID29 BID2 , less attention has been paid on generating adversarial examples on texts. A few recent",
"studies defined adversarial perturbations on deep RNN-based text classifiers BID30 BID32 . BID30 first",
"chose the word at a random position in a text input, then used a projected Fast Gradient Sign Method to perturb the word's embedding vector. The perturbed",
"vector is then projected to the nearest word vector in the word embedding space, resulting in an adversarial sequence (adversarial examples in the text case; we adopt the name in this paper). This procedure",
"may, however, replace words in an input sequence with totally irrelevant words since there is no hard guarantee that words close in the embedding space are semantically similar. BID32 used the",
"\"saliency map\" of input words and complicated linguistic strategies to generate adversarial sequences that are semantically meaningful to humans. However, this",
"strategy is difficult to perform automatically. BID12 proposed",
"greedy scoring strategies to rank words in a text sequence and then applied simple character-level Figure 1 : An example of MCTSBug generated black-box adversarial sequence. x shows an original",
"text sample and x shows an adversarial sequence generated from x. From x to x , only",
"two characters are modified. This fools the deep",
"classifier to return a wrong classification of sentiment (from positive to negative).transformations like",
"swapping to fool deep classifiers. Its central idea, minimizing",
"the edit distance of the perturbation makes sense. However, the perturbations are",
"quite visible and the empirical effectiveness needs improvements. We provide more discussions in",
"Section 5.1.Crafting adversarial examples on discrete text sequences is fundamentally different from creating them on continuous inputs like images or audio signals. Continuous input such as images",
"can be naturally represented as points in a continuous R d space (d denotes the total number of pixels in an image). Using an L p -norm based distance",
"metric to limit the modification ∆x on images appears natural and intuitive. However, for text inputs searching",
"for small text modifications is difficult, because it is hard to define the distance between two discrete sequences. Three possible choices exist:• Because",
"deep learning NLP models usually use an embedding layer to map discrete inputs to a continuous space (gradients on the embedding layer is calculable). Therefore we can measure the distance",
"among text inputs in the continuous space defined by the word2vec embedding BID31 . However, state-of-the-art embedding models",
"are still unsatisfactory especially in terms of providing nearest neighbors among words. FIG3 shows a few examples we chose based on",
"the GloVe BID31 . Two words close to each other in the GloVe",
"embedding cannot guarantee they are similar, for example, they can be antonyms. For instance, the word \"loved\" is the nearest",
"neighbor of the word \"hated\".• We can use the huge body of linguistic knowledge",
"to measure the distance between two text inputs. However, this strategy is hard to generalize and is",
"difficult to extend to other discrete spaces.• Shown in Figure 1 , we can also use the edit distance",
"between text x and text x being defined as the minimal edit operations that are required to change x to x . We focus on this distance in the paper. Intuitively we",
"want to find imperceptible perturbations",
"on a text input (with respect to human eyes) to evade deep learning classifiers.The second major concern we consider in this paper is the black-box setup. An adversary may have various degrees of knowledge about",
"a deep-learning classifier it tries to fool, ranging from no information to complete information. FIG4 depicts the Perspective API BID19 from Google, which",
"is a deep learning based text classification system that predicts whether a message is toxic or not. This service can be accessed directly from the API website",
"that makes querying the model uncomplicated and widely accessible. The setting is a black-box scenario as the model is run on",
"cloud servers and its structure and parameters are not available. Many state-of-the-art deep learning applications have the",
"similar system design: the learning model is deployed on the cloud servers, and users can access the model via an app through a terminal machine (frequently a mobile device). In such cases, a user could not examine or retrieve the inner",
"structure of the models. We believe that the black-box attack is generally more realistic",
"than the white-box. Previous efforts about adversarial text sequences mostly replied",
"on using gradient evidence. We instead assume attackers cannot access the structure, parameters",
"or gradient of the target model.Considering the vast search space of possible changes (among all words/characters changes) from x to x , we design a search strategy based on Monte Carlo tree search (MCTS) for finding the most important few words to perturb. The search is conducted as a sequential decision process and aims to",
"make small edit operations such that a human would consider the generated x (almost) the same as the original sequence. Inspired by the homoglyph attack BID11 that attacks characters with",
"symbols of identical shapes, we replace a character in those important words found by MCTS with its homoglyph character (of identical shape). This simple strategy can effectively forces a deep classifier to a",
"wrong decision by perturbing only a few characters in a text input.Contributions: This paper presents an effective algorithm, MCTSBug , that can generate adversarial sequences of natural language inputs to evade deep-learning classifiers. The techniques we explore here may shed light on discovering the vulnerability",
"of using deep learning on other discrete inputs. Our novel algorithm has the following properties:• Black-box: Previous methods",
"require knowledge of the model structure and parameters of the word embedding layer, while our method can work in a black-box setting.• Effective: on seven real-world text classification tasks, our MCTSBug can fool",
"two stateof-the-art deep RNN models with the success rate of 95% on average (see FIG2 ).• Simple: MCTSBug uses simple character-level transformations to generate adversarial",
"sequences, in contrast to previous works that use projected gradient or multiple linguisticdriven steps.• Almost imperceptible perturbations to human observers: MCTSBug can generate adversarial",
"sequences that visually identical to seed sequences.Att: For the rest of the paper, we denote samples in the form of pair (x, y), where x = x 1 x 2 x 3 ...x n is an input text sequence including n tokens (each token could be either a word or a character in different models) and y set including {1, ..., K} is a label of K classes. A deep learning classifier is represented as F : X → Y, a function mapping from the input",
"space to the label space.",
"Due to the combinatorial nature of a large and discrete input space, searching adversarial text sequences is not a straightforward extension from image-based techniques generating adversarial examples.",
"In this paper, we present a novel framework, MCTSBug which can generate imperceptible adversarial text sequences in a black-box manner.",
"By combining homoglyph attack and MCTS, the proposed method can successfully fool two deep RNN-based text classifiers across seven large-scale benchmarks.The key idea we utilized in MCTSBug is that words with one character changed to its homoglyph pair are usually viewed as \"unknown\" by the deep-learning models.",
"The shape change is almost invisible to humans, however, it is a huge change to a deep-learning model.",
"More fundamentally this is caused by the fact that NLP training datasets normally only cover a very small portion of the huge space built by the combination of possible NLP letters.",
"Then the question is: should the deep-learning classifiers model those words having identical shapes to human eyes as similar representations?",
"We think the answer should be Yes."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.1395348757505417,
0,
0.12765957415103912,
0.11764705181121826,
0.06666666269302368,
0.045454539358615875,
0.0624999962747097,
0.13333332538604736,
0.05405404791235924,
0.1463414579629898,
0,
0.0952380895614624,
0.04878048226237297,
0.1428571343421936,
0.2222222238779068,
0.10256409645080566,
0.1463414579629898,
0,
0.22857142984867096,
0.08695651590824127,
0.1818181723356247,
0.27586206793785095,
0,
0.0714285671710968,
0.0833333283662796,
0,
0.0714285671710968,
0.1428571343421936,
0,
0.25,
0.10810810327529907,
0.09756097197532654,
0.06666666269302368,
0.0555555522441864,
0.07999999821186066,
0,
0,
0.19999998807907104,
0.05882352590560913,
0.19999998807907104,
0.09999999403953552,
0.11999999731779099,
0.05714285373687744,
0.04999999701976776,
0.12121211737394333,
0.06451612710952759,
0.07999999821186066,
0,
0.1538461446762085,
0.0714285671710968,
0.13114753365516663,
0,
0.045454539358615875,
0.1666666567325592,
0.060606054961681366,
0.13333332538604736,
0.19512194395065308,
0.1538461446762085,
0.09638553857803345,
0.10526315122842789,
0.19999998807907104,
0.1764705777168274,
0.09836065024137497,
0.06896550953388214,
0,
0.05882352590560913,
0
] | SJxiHnCqKQ | true | [
"Use Monte carlo Tree Search and Homoglyphs to generate indistinguishable adversarial samples on text data"
] |
[
" We introduce a model that learns to convert simple hand drawings\n into graphics programs written in a subset of \\LaTeX.~",
"The model\n combines techniques from deep learning and program synthesis. ",
"We\n learn a convolutional neural network that proposes plausible drawing\n primitives that explain an image.",
"These drawing primitives are like\n a trace of the set of primitive commands issued by a graphics\n program.",
"We learn a model that uses program synthesis techniques to\n recover a graphics program from that trace.",
"These programs have\n constructs like variable bindings, iterative loops, or simple kinds\n of conditionals.",
"With a graphics program in hand, we can correct\n errors made by the deep network and extrapolate drawings. ",
"Taken\n together these results are a step towards agents that induce useful,\n human-readable programs from perceptual input.",
"How can an agent convert noisy, high-dimensional perceptual input to a symbolic, abstract object, such as a computer program?",
"Here we consider this problem within a graphics program synthesis domain.",
"We develop an approach for converting hand drawings into executable source code for drawing the original image.",
"The graphics programs in our domain draw simple figures like those found in machine learning papers (see FIG0 ).",
"The key observation behind our work is that generating a programmatic representation from an image of a diagram involves two distinct steps that require different technical approaches.",
"The first step involves identifying the components such as rectangles, lines and arrows that make up the image.",
"The second step involves identifying the high-level structure in how the components were drawn.",
"In FIG0 , it means identifying a pattern in how the circles and rectangles are being drawn that is best described with two nested loops, and which can easily be extrapolated to a bigger diagram.We present a hybrid architecture for inferring graphics programs that is structured around these two steps.",
"For the first step, a deep network to infers a set of primitive shape-drawing commands.",
"We refer FIG8 : Both the paper and the system pipeline are structured around the trace hypothesisThe new contributions of this work are: (1) The trace hypothesis: a framework for going from perception to programs, which connects this work to other trace-based models, like the Neural Program Interpreter BID17 ; BID26 A model based on the trace hypothesis that converts sketches to high-level programs: in contrast to converting images to vectors or low-level parses BID11 BID14 BID24 BID1 BID2 .",
"FORMULA8 A generic algorithm for learning a policy for efficiently searching for programs, building on Levin search BID13 and recent work like DeepCoder BID0 .",
"Even with the high-level idea of a trace set, going from hand drawings to programs remains difficult.",
"We address these challenges: (1) Inferring trace sets from images requires domain-specific design choices from the deep learning and computer vision toolkits (Sec.",
"2) .",
"FORMULA4 Generalizing to noisy hand drawings, we will show, requires learning a domain-specific noise model that is invariant to the variations across hand drawings (Sec. 2.1).",
"(3) Discovering good programs requires solving a difficult combinatorial search problem, because the programs are often long and complicated (e.g., 9 lines of code, with nested loops and conditionals).",
"We give a domain-general framework for learning a search policy that quickly guides program synthesizers toward the target programs (Sec. 3.1)."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0.0952380895614624,
0.0833333283662796,
0.1538461446762085,
0.25,
0,
0.13793103396892548,
0.07407406717538834,
0.2142857164144516,
0.1904761791229248,
0.1538461446762085,
0,
0.05714285373687744,
0,
0.17391303181648254,
0.1090909093618393,
0.1666666567325592,
0.07792207598686218,
0.0624999962747097,
0.29629629850387573,
0,
0.17142856121063232,
0.05128204822540283,
0.12903225421905518
] | H1DJFybC- | true | [
"Learn to convert a hand drawn sketch into a high-level program"
] |
[
"Adversarial examples remain an issue for contemporary neural networks.",
"This paper draws on Background Check (Perello-Nieto et al., 2016), a technique in model calibration, to assist two-class neural networks in detecting adversarial examples, using the one dimensional difference between logit values as the underlying measure.",
"This method interestingly tends to achieve the highest average recall on image sets that are generated with large perturbation vectors, which is unlike the existing literature on adversarial attacks (Cubuk et al., 2017).",
"The proposed method does not need knowledge of the attack parameters or methods at training time, unlike a great deal of the literature that uses deep learning based methods to detect adversarial examples, such as Metzen et al. (2017), imbuing the proposed method with additional flexibility.",
"Adversarial examples are specially crafted input instances generated by adversarial attacks.",
"The term was introduced by BID23 in the context of image classification.",
"These attacks generate, or manipulate data, to achieve poor performance when classified by neural networks, which poses existential questions about their usage in high stakes security critical applications.",
"Since they were introduced, there have been many papers that have introduced novel attack methods and other papers that attempt to combat those attacks.",
"For instance, BID5 introduces the fast gradient sign method (FGSM), and BID20 proposes a method based on modifying the gradient of the softmax function as a defence.Adversarial attacks can be identified into various classes such as white box and black box, where in the former, the attack has full knowledge of all model parameters.",
"Examples created by these attacks can be false positives or false negatives.",
"In the case of images, they can be nonsensical data (e.g. noise classified as a road sign) or clear cut (e.g. a visually clear cat, classified as a road sign).",
"These attacks can be non-targeted or targeted such that the classifier chooses a specific class for the adversarial example.",
"Various adversarial defences exist, some based on deep learning techniques and others on purely distributional techniques.",
"Similar work on adversarial defences has been done by BID6 , in which the network is trained on specific attack types and parameters with an additional outlier class for adversarial examples.",
"A multi-dimensional statistical test over the maximum mean discrepancy and the energy distance on input features is then used to classify instances as adversarial.",
"Other work has been done by BID0 , where Gaussian Processes are placed on top of conventional convolutional neural network architectures, with radial basis kernels, imbuing the neural network with a way of understanding its own perceptual limits.",
"The authors find that the network becomes more resistant to adversarial attack.",
"The work that follows continues in a similar vein to both of these methods.",
"Some methods such as BID14 use sub-units of deep learning architectures to detect adversarial instances.Calibration is a technique of converting model scores, normally, through application of a post processing function, to probability estimates.",
"Background Check is a method to yield probability estimates, via a set of explicit assumptions, in regions of space where no data has been observed.",
"In this work, Background Check is useful in producing calibrated probabilities for adversarial data that often exists in regions where no training and test data has been seen.",
"Reliable probability estimates can then be measured by calibration and refinement loss.",
"Various calibrating procedures exist such as binning, logistic regression, isotonic regression and softmax.",
"BID8 demonstrates the logistic function is optimal when the class-conditional densities are Gaussians with unit variance.",
"Softmax extends this to multi-variate Gaussian densities with unit variance.Calibration of neural network models has been performed by BID7 , using a method called Temperature Scaling, that modifies the gradient of the softmax function allowing softmax to calibrate densities with non-unit variance.",
"The authors perform this calibration after noticing that calibration loss for neural networks has increased in recent years.",
"When adversarial attacks against neural networks are brought into perspective, a problem arises for existing calibration techniques, which is the question of mapping adversarial logit scores to reliable probability estimates (which should be zero for a successful adversarial attack).",
"In this work, a method is demonstrated that uses Background Check to identify adversarial attacks.",
"A novel approach to defending neural networks against adversarial attacks has been established.",
"This approach intersects two previously unrelated fields of machine learning, calibration and adversarial defences, using the principles underlying Background Check.",
"This work demonstrates that adversarial attacks, produced as a result of large perturbations of various forms, can be detected and assigned to an adversarial class.",
"The larger the perturbation, the easier it was for the attacks to be detected."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.06451612710952759,
0.31578946113586426,
0.2222222238779068,
0.22580644488334656,
0.12121211737394333,
0.1764705777168274,
0.1599999964237213,
0.09302324801683426,
0.17910447716712952,
0.060606054961681366,
0.13333332538604736,
0.19999998807907104,
0.1666666567325592,
0.19607841968536377,
0.17777776718139648,
0.2142857164144516,
0.23529411852359772,
0.2222222238779068,
0.19230768084526062,
0.17777776718139648,
0.0833333283662796,
0.05882352590560913,
0,
0.05405404791235924,
0.20689654350280762,
0.1538461446762085,
0.31578946113586426,
0.2702702581882477,
0.2857142686843872,
0.3333333134651184,
0.2222222238779068,
0.1764705777168274
] | Bkxdqj0cFQ | true | [
"This paper uses principles from the field of calibration in machine learning on the logits of a neural network to defend against adversarial attacks"
] |
[
"We present a novel multi-task training approach to learning multilingual distributed representations of text.",
"Our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model.",
"We construct sentence embeddings by processing word embeddings with an LSTM and by taking an average of the outputs.",
"Our architecture can transparently use both monolingual and sentence aligned bilingual corpora to learn multilingual embeddings, thus covering a vocabulary significantly larger than the vocabulary of the bilingual corpora alone.",
"Our model shows competitive performance in a standard cross-lingual document classification task.",
"We also show the effectiveness of our method in a low-resource scenario.",
"Learning distributed representations of text, whether it be at the level of words BID26 ; BID28 , phrases BID32 ; BID30 , sentences BID18 or documents BID22 , has been one of the most widely researched subjects in natural language processing in recent years.",
"Word/sentence/document embeddings, as they are now commonly referred to, have quickly become essential ingredients of larger and more complex NLP systems BID4 ; BID25 BID8 ; BID1 ; BID6 looking to leverage the rich semantic and linguistic information present in distributed representations.One of the exciting avenues of research that has been taking place in the context of distributed text representations, which is also the subject of this paper, is learning multilingual text representations shared across languages BID11 ; BID3 ; BID24 .",
"Multilingual embeddings open up the possibility of transferring knowledge across languages and building complex NLP systems even for languages with limited amount of supervised resources BID0 ; BID17 .",
"By far the most popular approach to learning multilingual embeddings is to train a multilingual word embedding model that is then used to derive representations for sentences and documents by composition BID14 .",
"These models are typically trained solely on word or sentence aligned corpora and the composition models are usually simple predefined functions like averages over word embeddings BID14 ; BID27 or parametric coposition models learned along with the word embeddings.In this work we learn word and sentence embeddings jointly by training a multilingual skip-gram model BID24 together with a cross-lingual sentence similarity model.",
"The multilingual skip-gram model transparently consumes (word, context word) pairs constructed from monolingual as well as sentence aligned bilingual corpora.",
"We use a parametric composition model to construct sentence embeddings from word embeddings.",
"We process word embeddings with a Bi-directional LSTM and then take an average of the LSTM outputs, which can be viewed as context dependent word embeddings.",
"Since our multilingual skip-gram and cross-lingual sentence similarity models are trained jointly, they can inform each other through the shared word embedding layer and promote the compositionality of learned word embeddings at training time.",
"Further, the gradients flowing back from the sentence similarity model can affect the embeddings learned for words outside the vocabulary of the parallel corpora.",
"We hypothesize these two aspects of our model lead to more robust sentence embeddings.Our contributions are as follows :• Scalable approach: We show that our approach performs better as more languages are added, since represent the extended lexicon in a suitable manner.•",
"Ability to perform well in low-resource scenario: Our approach produces representations comparable with the state-of-art multilingual sentence embeddings using a limited amount of parallel data. Our",
"sentence embedding model is trained end-to-end on a vocabulary significantly larger than the vocabulary of the parallel corpora used for learning crosslingual sentence similarity.• Amenable",
"to Multi-task modeling: Our model can be trained jointly with proxy tasks, such as sentiment classification, to produce more robust embeddings for downstream tasks.",
"Our results suggest that using a parametric composition model to derive sentence embeddings from word embeddings and joint multi-task learning of multilingual word and sentence embeddings are promising directions.",
"This paper is a snapshot of our current efforts and w e believe that our sentence embedding models can be improved further with straightforward modifications to the model architecture, for instance by using stacked LSTMs, and we plan to explore these directions in future work.In our exploration of architectures for the sentence encoding model, we also tried using a selfattention layer following the intuition that not all words are equally important for the meaning of a sentence.",
"However, we later realized that the cross lingual sentence similarity objective is at odds with what we want the attention layer to learn.",
"When we used self attention instead of simple averaging of word embeddings, the attention layer learns to give the entire weight to a single word in both the source and the target language since that makes optimizing cross lingual sentence similarity objective easier.Even though they are related tasks, multilingual skip-gram and cross-lingual sentence similarity models are always in a conflict to modify the shared word embeddings according to their objectives.",
"This conflict, to some extent, can be eased by careful choice of hyper-parameters.",
"This dependency on hyper-parameters suggests that better hyper-parameters can lead to better results in the multi-task learning scenario.",
"We have not yet tried a full sweep of the hyperparameters of our current models but we believe there may be easy gains to be had from such a sweep especially in the multi-task learning scenario."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25641024112701416,
0.4651162624359131,
0.24390242993831635,
0.2745097875595093,
0.21621620655059814,
0.2702702581882477,
0.06451612710952759,
0.15555554628372192,
0.11764705181121826,
0.3396226465702057,
0.3333333432674408,
0.22727271914482117,
0.3243243098258972,
0.2083333283662796,
0.2857142686843872,
0.2222222238779068,
0.28125,
0.35999998450279236,
0.2083333283662796,
0.1666666567325592,
0.3265306055545807,
0.18823528289794922,
0.260869562625885,
0.3037974536418915,
0.052631575614213943,
0.24390242993831635,
0.2142857164144516
] | SJsb_xTSM | true | [
"We jointly train a multilingual skip-gram model and a cross-lingual sentence similarity model to learn high quality multilingual text embeddings that perform well in the low resource scenario."
] |
[
"Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling.",
"However, they are often hard to train, and learning of GANs often becomes unstable.",
"Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property.",
"One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance.",
"In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network.",
"Experiments on the MNIST dataset show that our method is significantly stable to converge, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images.",
"Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method.",
"In addition, the proposed method enables more flexible generative modeling than WGAN.",
"Generative Adversarial Networks (GANs) BID2 are a powerful framework of generative modeling which is formulated as a minimax game between two networks: A generator network generates fake-data from some noise source and a discriminator network discriminates between fake-data and real-data.",
"GANs can generate much more realistic images than other generative models like variational autoencoder BID10 or autoregressive models BID14 , and have been widely used in high-resolution image generation BID8 , image inpainting BID18 , image-to-image translation BID7 , to mention a few.",
"However, GANs are often hard to train, and various ways to stabilize training have been proposed by many recent works.",
"Nonetheless, consistently stable training of GANs remains an open problem.GANs employ the Jensen-Shannon (JS) divergence to measure the distance between the distributions of real-data and fake-data BID2 .",
"provided an analysis of various distances and divergence measures between two probability distributions in view of their use as loss functions of GANs, and proposed Wasserstein GAN (WGAN) which has better theoretical properties than the original GANs.",
"WGAN requires that the discriminator (called the critic in ) must lie within the space of 1-Lipschitz functions to evaluate the Wasserstein distance via the Kantorovich-Rubinstein dual formulation.",
"further proposed implementing the critic with a deep neural network and applied weight clipping in order to ensure that the critic satisfies the Lipschitz condition.",
"However, weight clipping limits the critic's function space and can cause gradients in the critic to explode or vanish if the clipping parameters are not carefully chosen BID3 .",
"WGAN-GP BID3 and Spectral Normalization (SN) BID12 apply regularization and normalization, respectively, on the critic trying to make the critic 1-Lipschitz, but they fail to optimize the true Wasserstein distance.In the latest work, BID11 proposed a new WGAN variant to evaluate the exact empirical Wasserstein distance.",
"They evaluate the empirical Wasserstein distance between the empirical distributions of real-data and fake-data in the discrete case of the Kantorovich-Rubinstein dual for-mulation, which can be solved efficiently because the dual problem becomes a finite-dimensional linear-programming problem.",
"The generator network is trained using the critic network learnt to approximate the solution of the dual problem.",
"However, the problem of approximation error by the critic network remains.",
"In this paper, we propose a new generative model without the critic, which learns by directly evaluating gradient of the exact empirical optimal transport cost in the primal domain.",
"The proposed method corresponds to stochastic gradient descent of the optimal transport cost.",
"argued that JS divergences are potentially not continuous with respect to the generator's parameters, leading to GANs training difficulty.",
"They proposed instead using the Wasserstein-1 distance W 1 (q, p), which is defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p.",
"Under mild assumptions, W 1 (q, p) is continuous everywhere and differentiable almost everywhere.",
"We have proposed a new generative model that learns by directly minimizing exact empirical Wasserstein distance between the real-data distribution and the generator distribution.",
"Since the proposed method does not suffer from the constraints on the transport cost and the 1-Lipschitz constraint imposed on WGAN by solving the optimal transport problem in the primal domain instead of the dual domain, one can construct more flexible generative modeling.",
"The proposed method provides the generator with better gradient information to minimize the Wasserstein distance (Section 5.2) and achieved smaller empirical Wasserstein distance with lower computational cost (Section 5.1) than any other compared variants of WGAN.",
"In the future work, we would like to investigate the behavior of the proposed method when transport cost is defined in the feature space embedded by an appropriate inception model."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.13793103396892548,
0,
0.1111111044883728,
0.1463414579629898,
0.08888888359069824,
0.1304347813129425,
0.05882352590560913,
0.20689654350280762,
0.07843136787414551,
0.1090909019112587,
0.1666666567325592,
0.04878048226237297,
0.07843136787414551,
0.1463414579629898,
0.1538461446762085,
0,
0.2222222238779068,
0.17391303181648254,
0,
0.07407406717538834,
0.3636363446712494,
0.06666666269302368,
0.05714285373687744,
0.08888888359069824,
0,
0.7692307829856873,
0.1538461446762085,
0.16326530277729034,
0.13636362552642822
] | BJgTZ3C5FX | true | [
"We have proposed a flexible generative model that learns stably by directly minimizing exact empirical Wasserstein distance."
] |
[
"Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012.",
"Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue.",
"While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all.",
"As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others.",
"Our first contribution is a benchmark of 8 NAS methods on 5 datasets.",
"To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols.",
"Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline.",
"We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline.",
"These experiments highlight that:",
"(i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures;",
"(ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings;",
"(iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and",
"(iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures.",
"To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods.",
"The\n",
"code used is available at https://github.com/antoyang/NAS-Benchmark.",
"As the deep learning revolution helped us move away from hand crafted features (Krizhevsky et al., 2012) and reach new heights (He et al., 2016; Szegedy et al., 2017) , so does Neural Architecture Search (NAS) hold the promise of freeing us from hand-crafted architectures, which requires tedious and expensive tuning for each new task or dataset.",
"Identifying the optimal architecture is indeed a key pillar of any Automated Machine Learning (AutoML) pipeline.",
"Research in the last two years has proceeded at a rapid pace and many search strategies have been proposed, from Reinforcement Learning (Zoph & Le, 2017; Pham et al., 2018) , to Evolutionary Algorithms (Real et al., 2017) , to Gradient-based methods Liang et al., 2019) .",
"Still, it remains unclear which approach and search algorithm is preferable.",
"Typically, methods have been evaluated on accuracy alone, even though accuracy is influenced by many other factors besides the search algorithm.",
"Comparison between published search algorithms for NAS is therefore either very difficult (complex training protocols with no code available) or simply impossible (different search spaces), as previously pointed out (Li & Talwalkar, 2019; Sciuto et al., 2019; Lindauer & Hutter, 2019) .",
"NAS methods have been typically decomposed into three components (Elsken et al., 2019; Li & Talwalkar, 2019) : search space, search strategy and model evaluation strategy.",
"This division is important to keep in mind, as an improvement in any of these elements will lead to a better final performance.",
"But is a method with a more (manually) tuned search space a better AutoML algorithm?",
"If the key idea behind NAS is to find the optimal architecture, without human intervention, why are we devoting so much energy to infuse expert knowledge into the pipeline?",
"Furthermore, the lack of ablation studies in most works makes it harder to pinpoint which components are instrumental to the final performance, which can easily lead to Hypothesizing After the Results are Known (HARKing; Gencoglu et al., 2019) .",
"Paradoxically, the huge effort invested in finding better search spaces and training protocols, has led to a situation in which any randomly sampled architecture performs almost as well as those obtained by the search strategies.",
"Our findings suggest that most of the gains in accuracy in recent contributions to NAS have come from manual improvements in the training protocol, not in the search algorithms.",
"As a step towards understanding which methods are more effective, we have collected code for 8 reasonably fast (search time of less than 4 days) NAS algorithms, and benchmarked them on 5 well known CV datasets.",
"Using a simple metric-the relative improvement over the average architecture of the search space-we find that most NAS methods perform very similarly and rarely substantially above this baseline.",
"The methods used are DARTS, StacNAS, PDARTS, MANAS, CNAS, NSGANET, ENAS and NAO.",
"The datasets used are CIFAR10, CIFAR100, SPORT8, MIT67 and FLOWERS102.",
"Through a number of additional experiments on the widely used DARTS search space , we will show that:",
"(a) how you train your model has a much bigger impact than the actual architecture chosen;",
"(b) different architectures from the same search space perform very similarly, so much so that",
"(c) hyperparameters, like the number of cells, or the seed itself have a very significant effect on the ranking; and",
"(d) the specific operations themselves have less impact on the final accuracy than the hand-designed macro-structure of the network.",
"Notably, we find that the 200+ architectures sampled from this search space (available from the link in the abstract) are all within a range of one percentage point (top-1 accuracy) after a standard full training on CIFAR10.",
"Finally, we include some observations on how to foster reproducibility and a discussion on how to potentially avoid some of the encountered pitfalls.",
"In this section we offer some suggestions on how to mitigate the issues in NAS research.",
"Augmention tricks: while achieving higher accuracies is clearly a desirable goal, we have shown in section 4, that using well engineered training protocols can hide the contribution of the search algorithm.",
"We therefore suggest that both results, with and without training tricks, should be reported.",
"An example of best practice is found in Hundt et al. (2019) .",
"Search Space: it is difficult to evaluate the effectiveness of any given proposed method without a measure of how good randomly sampled architectures are.",
"This is not the same thing as performing a random search which is a search strategy in itself; random sampling is simply used to establish how good the average model is.",
"A simple approach to measure the variability of any new given search space could be to randomly sample k architectures and report mean and standard deviation.",
"We hope that future works will attempt to develop more expressive search spaces, capable of producing both good and bad network designs.",
"Restricted search spaces, while guaranteeing good performance and quick results, will inevitably be constrained by the bounds of expert knowledge (local optima) and will be incapable of reaching more truly innovative solutions (closer to the global optima).",
"As our findings in section 5.2 suggest, the overall wiring (the macro-structure) is an extremely influential component in the final performance.",
"As such, future research could investigate the optimal wiring at a global level: an interesting work in this direction is Xie et al. (2019a) .",
"Multiple datasets: as the true goal of AutoML is to minimize the need for human experts, focusing the research efforts on a single dataset will inevitably lead to algorithmic overfitting and/or methods heavily dependent on hyperparameter tuning.",
"The best solution for this is likely to test NAS algorithms on a battery of datasets, with different characteristics: image sizes, number of samples, class granularity and learning task.",
"Investigating hidden components: as our experiments in Sections 4 and 5.2 show, the DARTS search space is not only effective due to specific operations that are being chosen, but in greater part due to the overall macro-structure and the training protocol used.",
"We suggest that proper ablation studies can lead to better understanding of the contributions of each element of the pipeline.",
"The importance of reproducibility: reproducibility is of extreme relevance in all sciences.",
"To this end, it is very important that authors release not only their best found architecture but also the corresponding seed (if they did not average over multiple ones), as well as the code and the detailed training protocol (including hyperparameters).",
"To this end, NAS-Bench-101 (Ying et al., 2019) , a dataset mapping architectures to their accuracy, can be extremely useful, as it allows the quality of search strategies to be assessed in isolation from other NAS components (e.g. search space, training protocol) in a quick and reproducible fashion.",
"The code for this paper is open-source (link in the abstract).",
"We also open-source the 200+ trained architectures used in Section 5.",
"Hyperparameter tuning cost: tuning hyperparameters in NAS is an extremely costly component.",
"Therefore, we argue that either (i) hyperparameters are general enough so that they do not require tuning for further tasks, or (2) the cost is included in the search budget.",
"AutoML, and NAS in particular, have the potential to truly democratize the use of machine learning for all, and could bring forth very notable improvements on a variety of tasks.",
"To truly step forward, a principled approach, with a focus on fairness and reproducibility is needed.",
"In this paper we have shown that, for many NAS methods, the search space has been engineered such that all architectures perform similarly well and that their relative ranking can easily shift.",
"We have furthermore showed that the training protocol itself has a higher impact on the final accuracy than the actual network.",
"Finally, we have provided some suggestions on how to make future research more robust to these issues.",
"We hope that our findings will help the community focus their efforts towards a more general approach to automated neural architecture design.",
"Only then can we expect to learn from NASgenerated architectures as opposed to the current paradigm where search spaces are heavily influenced by our current (human) expert knowledge.",
"A APPENDIX This section details the datasets and the hyperparameters used for each method on each dataset.",
"Search spaces were naturally left unchanged.",
"Hyperparameters were chosen as close as possible to the original paper and occasionally updated to more recent implementations.",
"The network size was tuned similarly for all methods for SPORT8, MIT67 and FLOWERS102.",
"All experiments were run on NVIDIA Tesla V100 GPUs."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13636362552642822,
0.260869562625885,
0.09999999403953552,
0.21739129722118378,
0.514285683631897,
0.17543859779834747,
0.1621621549129486,
0.2790697515010834,
0,
0.2631579041481018,
0.19512194395065308,
0.05714285373687744,
0.19999998807907104,
0.19607841968536377,
0,
0.05714285373687744,
0.21052631735801697,
0.16129031777381897,
0,
0.1904761791229248,
0.03278687968850136,
0.12765957415103912,
0.23255813121795654,
0.05714285373687744,
0.1249999925494194,
0.2181818187236786,
0.15094339847564697,
0.260869562625885,
0.27586206793785095,
0.20408162474632263,
0.05714285373687744,
0.0624999962747097,
0.19999998807907104,
0.15789473056793213,
0.1111111044883728,
0.19999998807907104,
0.2631579041481018,
0.1818181723356247,
0.2926829159259796,
0.31578946113586426,
0.1538461446762085,
0,
0.11764705181121826,
0.2222222238779068,
0.21739129722118378,
0.17391303181648254,
0.09090908616781235,
0.11320754140615463,
0.1904761791229248,
0.1304347813129425,
0.2181818187236786,
0.23999999463558197,
0.1355932205915451,
0.20512819290161133,
0.12121211737394333,
0.03389830142259598,
0.21212120354175568,
0.12121211737394333,
0.1818181723356247,
0.12121211737394333,
0.07999999821186066,
0.2857142686843872,
0.10810810327529907,
0.07547169178724289,
0.24390242993831635,
0.15789473056793213,
0.13636362552642822,
0.0833333283662796,
0.21621620655059814,
0,
0.10526315122842789,
0.05714285373687744,
0.06451612710952759
] | HygrdpVKvr | true | [
"A study of how different components in the NAS pipeline contribute to the final accuracy. Also, a benchmark of 8 methods on 5 datasets."
] |
[
"Identifying analogies across domains without supervision is a key task for artificial intelligence.",
"Recent advances in cross domain image mapping have concentrated on translating images across domains.",
"Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain.",
"In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques.",
"We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function.",
"The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision.",
"Humans are remarkable in their ability to enter an unseen domain and make analogies to the previously seen domain without prior supervision (\"This dinosaur looks just like my dog Fluffy\").",
"This ability is important for using previous knowledge in order to obtain strong priors on the new situation, which makes identifying analogies between multiple domains an important problem for Artificial Intelligence.",
"Much of the recent success of AI has been in supervised problems, i.e., when explicit correspondences between the input and output were specified on a training set.",
"Analogy identification is different in that no explicit example analogies are given in advance, as the new domain is unseen.Recently several approaches were proposed for unsupervised mapping between domains.",
"The approaches take as input sets of images from two different domains A and B without explicit correspondences between the images in each set, e.g. Domain A: a set of aerial photos and Domain B: a set of Google-Maps images.",
"The methods learn a mapping function T AB that takes an image in one domain and maps it to its likely appearance in the other domain, e.g. map an aerial photo to a Google-Maps image.",
"This is achieved by utilizing two constraints: (i) Distributional constraint: the distributions of mapped A domain images (T AB (x)) and images of the target domain B must be indistinguishable, and (ii) Cycle constraint: an image mapped to the other domain and back must be unchanged, i.e., T BA (T AB (x)) = x.In this paper the task of analogy identification refers to finding pairs of examples in the two domains that are related by a fixed non-linear transformation.",
"Although the two above constraints have been found effective for training a mapping function that is able to translate between the domains, the translated images are often not of high enough visual fidelity to be able to perform exact matching.",
"We hypothesize that it is caused due to not having exemplar-based constraints but rather constraints on the distributions and the inversion property.In this work we tackle the problem of analogy identification.",
"We find that although current methods are not designed for this task, it is possible to add exemplar-based constraints in order to recover high performance in visual analogy identification.",
"We show that our method is effective also when only some of the sample images in A and B have exact analogies whereas the rest do not have exact analogies in the sample sets.",
"We also show that it is able to find correspondences between sets when no exact correspondences are available at all.",
"In the latter case, since the method retrieves rather than maps examples, it naturally yields far better visual quality than the mapping function.",
"Using the domain alignment described above, it is now possible to perform a two step approach for training a domain mapping function, which is more accurate than the results provided by previous unsupervised mapping approaches:1.",
"Find the analogies between the A and B domain, using our method.2.",
"Once the domains are aligned, fit a translation function T AB between the domains y mi = T AB (x i ) using a fully supervised method.",
"For the supervised network, larger architectures and non-adversarial loss functions can be used.",
"We presented an algorithm for performing cross domain matching in an unsupervised way.",
"Previous work focused on mapping between images across domains, often resulting in mapped images that were too inaccurate to find their exact matches.",
"In this work we introduced the exemplar constraint, specifically designed to improve match performance.",
"Our method was evaluated on several public datasets for full and partial exact matching and has significantly outperformed baseline methods.",
"It has been shown to work well even in cases where exact matches are not available.",
"This paper presents an alternative view of domain translation.",
"Instead of performing the full operation end-toend it is possible to",
"(i) align the domains, and",
"(ii) train a fully supervised mapping function between the aligned domains.Future work is needed to explore matching between different modalities such as images, speech and text.",
"As current distribution matching algorithms are insufficient for this challenging scenario, new ones would need to be developed in order to achieve this goal."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0.09090908616781235,
0,
0.042553190141916275,
0,
0,
0,
0.10810810327529907,
0.11428570747375488,
0.1111111044883728,
0.1463414579629898,
0,
0.05882352590560913,
0.04651162400841713,
0,
0,
0,
0.14814814925193787,
0,
0.052631575614213943,
0.09999999403953552,
0.13333332538604736,
0,
0.09999999403953552,
0.06666666269302368,
0,
0,
0,
0,
0.10526315122842789,
0,
0.11764705181121826,
0
] | BkN_r2lR- | true | [
"Finding correspondences between domains by performing matching/mapping iterations"
] |
[
"Effectively inferring discriminative and coherent latent topics of short texts is a critical task for many real world applications.",
"Nevertheless, the task has been proven to be a great challenge for traditional topic models due to the data sparsity problem induced by the characteristics of short texts.",
"Moreover, the complex inference algorithm also become a bottleneck for these traditional models to rapidly explore variations.",
"In this paper, we propose a novel model called Neural Variational Sparse Topic Model (NVSTM) based on a sparsity-enhanced topic model named Sparse Topical Coding (STC).",
"In the model, the auxiliary word embeddings are utilized to improve the generation of representations.",
"The Variational Autoencoder (VAE) approach is applied to inference the model efficiently, which makes the model easy to explore extensions for its black-box inference process.",
"Experimental results onWeb Snippets, 20Newsgroups, BBC and Biomedical datasets show the effectiveness and efficiency of the model.",
"With the great popularity of social networks and Q&A networks, short texts have been the prevalent information format on the Internet.",
"Uncovering latent topics from huge volume of short texts is fundamental to many real world applications such as emergencies detection BID18 , user interest modeling BID19 , and automatic query-reply BID16 .",
"However, short texts are characteristic of short document length, a very large vocabulary, a broad range of topics, and snarled noise, leading to much sparse word co-occurrence information.",
"Thus, the task has been proven to be a great challenge to traditional topic models.",
"Moreover, the complex inference algorithm also become a bottleneck for these traditional models to rapidly explore variations.To address the aforementioned issue, there are many previous works introducing new techniques such as word embeddings and neural variational inference to topic models.",
"Word embeddings are the low-dimensional real-valued vectors for words.",
"It have proven to be effective at capturing syntactic and semantic information of words.",
"Recently, many works have tried to incorporate word embeddings into topic models to enrich topic modeling BID5 BID7 BID22 .",
"Yet these models general rely on computationally expensive inference procedures like Markov Chain Monte Carlo, which makes them hard to rapidly explore extensions.",
"Even minor changes to model assumptions requires a re-deduction of the inference algorithms, which is mathematic challenging and time consuming.",
"With the advent of deep neural networks, the neural variational inference has emerged as a powerful approach to unsupervised learning of complicated distributions BID8 BID17 BID14 .",
"It approximates the posterior of a generative model with a variational distribution parameterized by a neural network, which allows back-propagation based function approximations in generative models.",
"The variational autoencoder (VAE) BID8 , one of the most popular deep generative models, has shown great promise in modeling complicated data.",
"Motivated by the promising potential of VAE in building generative models with black-box inference process, there are many works devoting to inference topic models with VAE BID20 BID13 BID4 .",
"However, these methods yield the same poor performance in short texts as LDA.Based on the analysis above, we propose a Neural Variational Sparse Topic Model (NVSTM) based on a sparsity-enhanced topic model STC for short texts.",
"The model is parameterized with neural networks and trained with VAE.",
"It still follows the probabilistic characteristics of STC.",
"Thus, the model inherit s the advantages of both sparse topic models and deep neural networks.",
"Additionally, we exploit the auxiliary word embeddings to improve the generation of short text representations.1.",
"We propose a novel Neural Variational Sparse Topic Model (NVSTM) to learn sparse representations of short texts.",
"The VAE is utilized to inference the model effectively.",
"2. The general word semantic information is introduced to improve the sparse representations of short texts via word embeddings.",
"3. We conduct experiments on four datasets.",
"Experimental results demonstrate our model's superiority in topic coherence and text classification accuracy.The rest of this paper is organized as follows.",
"First, we reviews related work.",
"Then, we present the details of the proposed NVSTM, followed by the experimental results.",
"Finally, we draw our conclusions.",
"We propose a neural sparsity-enhanced topic model NVSTM, which is the first effort in introducing effective VAE inference algorithm to STC as far as we know.",
"We take advantage of VAE to simplify the inference process, which require no model-specific algorithm derivations.",
"With the employing of word embeddings and neural network framework, NVSTM is able to generate clearer and semanticenriched representations for short texts.",
"The evaluation results demonstrate the effectiveness and efficiency of our model.",
"Future work can include extending our model with other deep generative models, such as generative adversarial network (GAN)."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0.12121211737394333,
0.07999999821186066,
0.3870967626571655,
0,
0.06896550953388214,
0.08695651590824127,
0.07407406717538834,
0,
0.060606058686971664,
0.1818181723356247,
0.13333332538604736,
0,
0,
0.07999999821186066,
0.06451612710952759,
0.1428571343421936,
0.12903225421905518,
0.25806450843811035,
0,
0.12121211737394333,
0.29999998211860657,
0.3333333432674408,
0,
0.260869562625885,
0,
0.07999999821186066,
0.23529411852359772,
0,
0.13333332538604736,
0.06666666269302368,
0,
0,
0,
0.3636363744735718,
0.0833333283662796,
0.06896550953388214,
0.10526315122842789,
0.07999999821186066
] | ryQz_ZfHz | true | [
"a neural sparsity-enhanced topic model based on VAE"
] |
[
"Neural Tangents is a library for working with infinite-width neural networks.",
"It provides a high-level API for specifying complex and hierarchical neural network architectures.",
"These networks can then be trained and evaluated either at finite-width as usual, or in their infinite-width limit.",
"For the infinite-width networks, Neural Tangents performs exact inference either via Bayes' rule or gradient descent, and generates the corresponding Neural Network Gaussian Process and Neural Tangent kernels.",
"Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks. \n\n",
"The entire library runs out-of-the-box on CPU, GPU, or TPU.",
"All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. \n\n",
"In addition to the repository below, we provide an accompanying interactive Colab notebook at\n",
"https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb\n"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.375,
0.2222222238779068,
0.08695651590824127,
0,
0.09090908616781235,
0,
0,
0
] | SklD9yrFPS | false | [
"Keras for infinite neural networks."
] |
[
"Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge.",
"However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language.",
"Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability.",
"We propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering.",
"We propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders.",
"We fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. ",
"This allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data.",
"We evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al.,2017) – and outperforms both when used in an ensemble.",
"We consider the problem of multi-hop reasoning on natural language input.",
"For instance, consider the statements Socrates was born in Athens and Athens belongs to Greece, together with the question Where was Socrates born?",
"There are two obvious answers following from the given statements: Athens and Greece.",
"While Athens follows directly from the single statement Socrates was born in Athens, deducing Greece requires a reader to combine both provided statements using the knowledge that a person that was born in a city, which is part of a country, was also born in the respective country.Most recent work that addresses such challenges leverages deep learning based methods BID41 BID29 BID38 BID30 BID18 BID21 BID17 BID8 , capable of dealing with the linguistic variability and ambiguity of natural language text.",
"However, the black-box nature of neural networks makes it hard to interpret the exact reasoning steps leading to a prediction (local interpretation), as well as the induced model (global interpretation).Logic",
"programming languages like Prolog BID46 , on the other hand, are built on the idea of using symbolic rules to reason about entities, which makes them highly interpretable both locally and globally. The capability",
"to use user-defined logic rules allows users to incorporate external knowledge in a straightforward manner. Unfortunately,",
"because of their reliance on symbolic logic, systems built on logic programming need extensive preprocessing to account for the linguistic variability that comes with natural language BID23 .We introduce NLPROLOG",
", a system which combines a symbolic reasoner and a rule-learning method with pretrained sentence representations to perform rule-based multi-hop reasoning on natural language input.1 Like inductive logic",
"programming methods, it facilitates both global as well as local interpretation, and allows for straightforward integration of prior knowledge. Similarly to deep learning",
"based approaches, it can be applied to natural language text without the need to transforming it to formal logic.At the core of the proposed method is an external non-differentiable theorem prover which can take similarities between symbols into account. Specifically, we modify a",
"Prolog interpreter to support weak-unification as proposed by BID39 . To obtain similarities between",
"symbols, we utilize sentence encoders initialized with pretrained sentence embeddings BID28 and then fine-tune these for a downstream question answering task via gradient-based optimization methods. Since the resulting system contains",
"non-differentiable components, we propose using Evolution Strategies (ES) BID9 as a gradient estimator BID47 for training the systemenabling us to fine-tune the sentence encoders and to learn domain-specific logic rules (e.g. that the relation is in is transitive) from natural language training data. This results in a system where training",
"can be trivially parallelized, and which allows to change the logic formalism by simply exchanging the external prover without the need for an intricate re-implementation as an end-to-end differentiable function.In summary, our main contributions are: a) we show how Prolog-like reasoning can",
"be applied to natural language input by employing a combination of pretrained sentence embeddings, an external logic prover, and fine-tuning using Evolution Strategies, b) we extend a Prolog interpreter with weak",
"unification based on distributed representations, c) we present Gradual Rule Learning (GRL),",
"a training algorithm that allows the proposed system to learn First-Order Logic (FOL) rules from entailment, and d) we evaluate the proposed system on two",
"different Question Answering (QA) datasets and demonstrate that its performance is on par with state-of-the-art neural QA models in many cases, while having different failure modes. This allows to build an ensemble of NLPROLOG",
"and a neural QA model that outperforms all individual models.",
"We have developed NLPROLOG, a system that is able to perform rule-based reasoning on natural language input, and can learn domain-specific natural language rules from training data.",
"To this end, Figure 3 : Example proof trees generated by NLPROLOG.",
"Each of the two trees shows an application of a transitive rule, the first for the predicate developer and the second for the predicate country.",
"The rule templates are displayed with the most similar predicate.",
"Note the noise introduced by the Open IE process, e.g. QUANT_0_1 and that entities and predicates do not need to match exactly.we have proposed to combine a symbolic prover with pretrained sentence embeddings and to train the resulting system with Evolution Strategies.",
"We have evaluated NLPROLOG on two different QA tasks, showing that it can learn domain-specific rules and produce predictions which complement those of the two strong baselines BIDAF and FASTQA.",
"This allows to build an ensemble of a baseline and NLPROLOG which outperforms all single models.While we have focused on a subset of First Order Logic in this work, the expressiveness of NL-PROLOG could be extended by incorporating a different symbolic prover.",
"For instance, a prover for temporal logic BID27 ) would allow to model temporal dynamics in natural language and enable us to evaluate NLPROLOG on the full set of BABI tasks.",
"We are also interested in incorporating future improvements of symbolic provers, Open IE systems and pretrained sentence representations to further enhance the performance of NLPROLOG.",
"To study the performance of the proposed method without the noise introduced by the Open IE step, it would be useful to evaluate it on tasks like knowledge graph reasoning.",
"Additionally, it would be interesting to study the behavior of NLPROLOG in the presence of multiple WIKIHOP query predicates.",
"else if x is f (x 1 , . . . , x n ), y is f (y 1 , . . . , y n ), and f ∼ f ≥ λ then S := S ∧ f ∼ f return unify(x 1 :: . . . :: x n , y 1 :: . . . :: y n , θ, S ) end else if x is p(x 1 , . . . , x n ), y is p (y 1 , . . . , y n ), and p ∼ p ≥ λ then S := S ∧ f ∼ f return unify(x 1 :: . . . :: x n , y 1 :: . . . :: y n , θ, S ) end else if x is x 1 :: . . . :: x n and y is y 1 :: . . . :: y n then (θ , S ) := unify(x 1 , y 1 , θ, S) return unify(x 2 :: . . . :: x n , y 2 :: . . . :: y n , θ , S ) end else if x is empty list and y is empty list then return (θ, S) else return (failure, 0) fun unify_var (v, o, θ, S) if {v/val} ∈ θ then return unify(val, o, θ, S) else if {o/val} ∈ θ then return unify(var, val, θ, S) else return ({v/o} + θ, S) Algorithm 1: The weak unification algorithm in Spyrolog without occurs check A.2",
"RUNTIME OF PROOF SEARCHThe worst case complexity vanilla logic programming is exponential in the depth of the proof BID34 .",
"However, in our case this is a particular problem because weak unification requires the prover to attempt unification between all entity/predicate symbols.To keep things tractable, NLPROLOG only attempts to unify symbols with a similarity greater than some user-defined threshold λ.",
"Furthermore, in the search step for one statement q, for the rest of the search, λ is set to λ := max(λ, S) whenever a proof for q with success score S is found.",
"Due to the monotonicity of the employed aggregation functions, this allows to prune the search tree without losing the guarantee to find the proof yielding the maximum success score.",
"We found this optimization to be crucial to make the proof search scale for the studied wikihop predicates."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23076923191547394,
0.1666666567325592,
0.13793103396892548,
0.26923075318336487,
0.2083333283662796,
0.3404255211353302,
0.3921568691730499,
0.21212120354175568,
0.2926829159259796,
0.12244897335767746,
0.09302324801683426,
0.1458333283662796,
0.14035087823867798,
0.09677419066429138,
0.08888888359069824,
0.2711864411830902,
0.45614033937454224,
0.11538460850715637,
0.14705881476402283,
0.09302324801683426,
0.23728813230991364,
0.24657534062862396,
0.11594202369451523,
0.4000000059604645,
0.0476190447807312,
0.26923075318336487,
0.2222222238779068,
0.14999999105930328,
0.4000000059604645,
0.0476190447807312,
0.1249999925494194,
0.04999999701976776,
0.3283582031726837,
0.20689654350280762,
0.14492753148078918,
0.23728813230991364,
0.18518517911434174,
0.2142857164144516,
0.08510638028383255,
0.02173912525177002,
0,
0.08955223113298416,
0.10344827175140381,
0.038461532443761826,
0.08695651590824127
] | ByfXe2C5tm | true | [
"We introduce NLProlog, a system that performs rule-based reasoning on natural language by leveraging pretrained sentence embeddings and fine-tuning with Evolution Strategies, and apply it to two multi-hop Question Answering tasks."
] |
[
"Training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing.",
"This creates a fundamental quality-versus-quantity trade-off in the learning process. ",
"Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data?",
"We argue that if the learner could somehow know and take the label-quality into account, we could get the best of both worlds. ",
"To this end, we introduce “fidelity-weighted learning” (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data.",
"FWL modulates the parameter updates to a student network, trained on the task we care about on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher, who has access to limited samples with high-quality labels.",
"\"All samples are equal, but some samples are more equal than others.\" -Inspired by George Orwell quote, Animal Farm, 1945 The success of deep neural networks to date depends strongly on the availability of labeled data and usually it is much easier and cheaper to obtain small quantities of high-quality labeled data and large quantities of unlabeled data.",
"For a large class of tasks, it is also easy to define one or more so-called \"weak annotators\" BID10 , additional (albeit noisy) sources of weak supervision based on heuristics or weaker, biased classifiers trained on e.g. non-expert crowd-sourced data or data from different domains that are related.",
"While easy and cheap to generate, it is not immediately clear if and how these additional weakly-labeled data can be used to train a stronger classifier for the task we care about.Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of \"strong\" labels, the simplest approach is to expand the training set simply by including the weakly-supervised samples (all samples are equal).",
"Alternatively, one may pretrain on the weak data and then fine-tune on strong data, which is one of the common practices in semi-supervised learning.",
"We argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality.",
"Instead, we introduce Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample.",
"By directly modeling the inaccuracies introduced by the weak annotator in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data.",
"Training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available, a situation which often arises in practice.",
"In this paper, we introduced fidelity-weighted learning (FWL), a new student-teacher framework for semi-supervised learning in the presence of weakly labeled data.",
"We applied FWL to document ranking and empirically verified that FWL speeds up the training process and improves over state-of-the-art semi-supervised alternatives."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.07692307233810425,
0.13333332538604736,
0.0555555522441864,
0.5714285373687744,
0.04081632196903229,
0.09677419066429138,
0.06779660284519196,
0.1764705777168274,
0.1111111044883728,
0.09999999403953552,
0.20338982343673706,
0.07407406717538834,
0.2666666507720947,
0.2222222238779068,
0.17142856121063232
] | SkxwBgpmDE | true | [
"We propose Fidelity-weighted Learning, a semi-supervised teacher-student approach for training neural networks using weakly-labeled data."
] |
[
"This paper is focused on investigating and demystifying an intriguing robustness phenomena in over-parameterized neural network training.",
"In particular we provide empirical and theoretical evidence that first order methods such as gradient descent are provably robust to noise/corruption on a constant fraction of the labels despite over-parameterization under a rich dataset model.",
"In particular:",
"i) First, we show that in the first few iterations where the updates are still in the vicinity of the initialization these algorithms only fit to the correct labels essentially ignoring the noisy labels.",
"ii) Secondly, we prove that to start to overfit to the noisy labels these algorithms must stray rather far from from the initial model which can only occur after many more iterations.",
"Together, these show that gradient descent with early stopping is provably robust to label noise and shed light on empirical robustness of deep networks as well as commonly adopted early-stopping heuristics."
] | [
0,
1,
0,
0,
0,
0
] | [
0.05882352590560913,
0.47058823704719543,
0.09090908616781235,
0.17777776718139648,
0.2978723347187042
] | H1eWPNr224 | false | [
"We prove that gradient descent is robust to label corruption despite over-parameterization under a rich dataset model."
] |
[
"Weight pruning has been introduced as an efficient model compression technique.",
"Even though pruning removes significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require significant amount of memory to store index-related information.",
"Moreover, computations associated with such sparse matrix formats are slow because sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently.",
"As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested.",
"Decoding non-zero weights, however, is still sequential in Viterbi-based pruning.",
"In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix.",
"The proposed sparse matrix is constructed by combining pruning and weight quantization.",
"For the latest RNN models on PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19x using the proposed sparse matrix format compared to the baseline model.",
"Compressed weight and indices can be reconstructed into a dense matrix fast using Viterbi encoders.",
"Simulation results show that the proposed scheme can feed parameters to processing elements 20 % to 106 % faster than the case where the dense matrix values directly come from DRAM.",
"Deep neural networks (DNNs) require significant amounts of memory and computation as the number of training data and the complexity of task increases BID0 .",
"To reduce the memory burden, pruning and quantization have been actively studied.",
"Pruning removes redundant connections of DNNs without accuracy degradation BID6 .",
"The pruned results are usually stored in a sparse matrix format such as compressed sparse row (CSR) format or compressed sparse column (CSC) format, which consists of non-zero values and indices that represent the location of non-zeros.",
"In the sparse matrix formats, the memory requirement for the indices is not negligible.Viterbi-based pruning BID14 significantly reduces the memory footprint of sparse matrix format by compressing the indices of sparse matrices using the Viterbi algorithm BID3 .",
"Although Viterbi-based pruning compresses the index component considerably, weight compression can be further improved in two directions.",
"First, the non-zero values in the sparse matrix can be compressed with quantization.",
"Second, sparse-to-dense matrix conversion in Viterbi-based pruning is relatively slow because assigning non-zero values to the corresponding indices requires sequential processes while indices can be reconstructed in parallel using a Viterbi Decompressor (VD).Various",
"quantization techniques can be applied to compress the non-zero values, but they still cannot reconstruct the dense weight matrix quickly because it takes time to locate non-zero values to the corresponding locations in the dense matrix. These open",
"questions motivate us to find a non-zero value compression method, which also allows parallel sparse-to-dense matrix construction. The contribution",
"of this paper is as follows.(a) To reduce the",
"memory footprint of neural networks further, we propose to combine the Viterbibased pruning BID14 ) with a novel weight-encoding scheme, which also uses the Viterbi-based approach to encode the quantized non-zero values. (b) We suggest two",
"main properties of the weight matrix that increase the probability of finding \"good\" Viterbi encoded weights. First, the weight",
"matrix with equal composition ratio of '0' and '1' for each bit is desired. Second, using the",
"pruned parameters as \"Don't Care\" terms increases the probability of finding desired Viterbi weight encoding. (c) We demonstrate",
"that the proposed method can be applied to Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) with various sizes and depths. (d) We show that using",
"the same Viterbi-based approach to compress both indices and non-zero values allows us to build a highly parallel sparse-to-dense reconstruction architecture. Using a custom cycle-simulator",
", we demonstrate that the reconstruction can be done fast.",
"We proposed a DNN model compression technique with high compression rate and fast dense matrix reconstruction process.",
"We adopted the Viterbi-based pruning and alternating multi-bit quantization technique to reduce the memory requirement for both non-zeros and indices of sparse matrices.",
"Then, we encoded the quantized binary weight codes using Viterbi algorithm once more.",
"As the non-zero values and the corresponding indices are generated in parallel by multiple Viterbi encoders, the sparse-to-dense matrix conversion can be done very fast.",
"We also demonstrated that the proposed scheme significantly reduces the memory requirements of the parameters for both RNN and CNN.",
"A APPENDIX A.1",
"PRUNING USING THE VITERBI ALGORITHM In Viterbi-based pruning scheme, the binary outputs generated by a Viterbi Decompressor (VD) are used as the index matrix that indicates whether a weight element is pruned ('0') or not ('1').",
"Suppose the number of elements in a target weight matrix is q, and the number of outputs generated by a VD at each time step is N ind , then only 2 q/N ind binary matrices can be generated by the VD among all 2 q binary matrices.",
"The index matrix which minimizes the accuracy loss should be selected among binary matrix candidates which VD can generate in this pruning scheme, and the Viterbi algorithm is used for this purpose.The overall pruning process is similar to the binary weight encoding process using the Viterbi algorithm in Section 3.3.",
"First, Trellis diagram ( FIG3 ) of the VD which is used for pruning is constructed, and then the cost function is computed by using the path metric and the branch metric.",
"The same path metric shown in Equation 1 in Section 3.3 is used to select the branch which maximizes the path metric between two connected branches from the previous states.",
"On the other hand, a different branch metric λ i,j t is used for pruning, which is expressed as: DISPLAYFORM0 where W i,j,m t is the magnitude of a parameter at the m th VD output and time index t, normalized by the maximum absolute value of all elements in target weight matrix, and TH p is the pruning threshold value determined heuristically.",
"As β i,j,m t gives additional points (penalties) to the parameters with large magnitude to survive (be pruned), the possibility to prune small-magnitude parameters is maximized.",
"S 1 and S 2 are the scaling factors which is empirically determined.",
"BID14 uses 5.0 and 10 4 each).",
"After computing the cost function through the whole time steps, the state with the maximum path metric is chosen, and we trace the previous state by selecting the surviving branch and corresponding indices backward until the first state is reached.The ideal pruning rate of the Viterbi-based pruning is 50 %, because the VD structures act like random number generator and the probability to generate '0' or '1' is 50 % each.",
"For various pruning rates, comparators and comparator threshold value, TH c , are used.",
"A N C -bit comparator receives N c VD outputs and generates 1-bit result whether the value made by the combination of the received VD outputs (e.g. {out 1 , out 2 , · · · , out N ind } where out i indicates the i th VD output) is greater than TH c or not.",
"For example, suppose a 4-bit comparator is used to the VD in Figure 1 and TH c = 3, then the probability for the comparator to generate '1' is 25%(= (3 + 1)/2 4 ) and this percentage is the target pruning rate.",
"Comparators and TH c control the value of pruning rates and the index compression ratio decreases by 1/N c times.It is reported that a low N ind is desired to prune weights of convolutional layers while high N ind can be used to prune the weights of fully-connected layers because of the trade-off between the index compression ratio and the accuracy BID14 .",
"Thus, in our paper, we use N ind = 50 and N c = 5 to prune weights of LSTMs and fully-connected layers in VGG-6 on CIFAR-10.",
"On the other hand, we use N ind = 10 and N c = 5 to prune weights of convolutional layers in VGG-6 on CIFAR-10."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0714285671710968,
0.09302324801683426,
0.05128204822540283,
0,
0,
0.15789473056793213,
0.20689654350280762,
0.09302324801683426,
0.3125,
0.09090908616781235,
0.05405404791235924,
0.06896550953388214,
0,
0.16326530277729034,
0.045454539358615875,
0.11764705181121826,
0.06896550953388214,
0.16326530277729034,
0.08510638028383255,
0.277777761220932,
0,
0.11999999731779099,
0.1249999925494194,
0.1764705777168274,
0.17142856121063232,
0.09999999403953552,
0.14999999105930328,
0.07407406717538834,
0.42424240708351135,
0.10526315122842789,
0.06666666269302368,
0.25,
0.17142856121063232,
0,
0.11764705181121826,
0.1538461446762085,
0.18518517911434174,
0.0952380895614624,
0.0476190410554409,
0.11764705181121826,
0,
0.13793103396892548,
0.07999999821186066,
0.02816900983452797,
0.06451612710952759,
0.03389830142259598,
0.07692307233810425,
0.16949151456356049,
0.04999999329447746,
0.04999999329447746
] | HkfYOoCcYX | true | [
"We present a new weight encoding scheme which enables high compression ratio and fast sparse-to-dense matrix conversion."
] |
[
"The use of deep learning models as priors for compressive sensing tasks presents new potential for inexpensive seismic data acquisition.",
"An appropriately designed Wasserstein generative adversarial network is designed based on a generative adversarial network architecture trained on several historical surveys, capable of learning the statistical properties of the seismic wavelets.",
"The usage of validating and performance testing of compressive sensing are three steps.",
"First, the existence of a sparse representation with different compression rates for seismic surveys is studied.",
"Then, non-uniform samplings are studied, using the proposed methodology.",
"Finally, recommendations for non-uniform seismic survey grid, based on the evaluation of reconstructed seismic images and metrics, is proposed.",
"The primary goal of the proposed deep learning model is to provide the foundations of an optimal design for seismic acquisition, with less loss in imaging quality.",
"Along these lines, a compressive sensing design of a non-uniform grid over an asset in Gulf of Mexico, versus a traditional seismic survey grid which collects data uniformly at every few feet, is suggested, leveraging the proposed method.",
"Conventional computational recovery is suffered from undesired artifacts such as over-smoothing, image size limitations and high computational cost.",
"The use of deep generative network (GAN) models offers a very promising alternative approach for inexpensive seismic data acquisition, which improved quality and revealing finer details when compared to conventional approaches or pixel-wise deep learning models.",
"As one of the pioneers to apply a pixel inpainting GAN on large, real seismic compressed image recovery, we contributes the following points:",
"1) Introduction of a GAN based inpainting model for compressed image recovery, under uniform or nonuniform sampling, capable to recover the heavily sampled data efficiently and reliably.",
"2) Superior model for compressive sensing on uniform sampling, that performs better than the originial network and the state-of-the-art interpolation method for uniform sampling.",
"3) Introduction of an effective, non-uniform, sampling survey recommendation, leveraging the GIN uniform sampling reconstructions and a hierarchical selection scheme.",
"(2) (1) We designed and implemented a modification of the GIN model, the GIN-CS, and successfully tested its performance on uniform samplings with compression rates ×2, ×4, ×8, ×16.",
"GIN-CS demonstrates superior reconstruction performance relatively to both the original GIN and the conventional biharmonic method.",
"More precisely, we show that seismic imaging can be successfully recovered by filling the missing traces, revealing finer details, even in high compression rate cases.",
"In addition, the proposed method runs approximately 300 times faster than the conventional method.",
"Finally, a strategy for constructing a recommendation of non-uniform survey is proposed for a field dataset from Gulf of Mexico, based on our results from a combination of limited amount of uniform sampling experiments."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.16326530277729034,
0.14814814925193787,
0.1428571343421936,
0.1304347813129425,
0.05128204822540283,
0.25,
0.1090909019112587,
0.21875,
0.12765957415103912,
0.21875,
0.307692289352417,
0.31578946113586426,
0.23529411852359772,
0.16326530277729034,
0.07017543166875839,
0.08888888359069824,
0.1090909019112587,
0,
0.2142857164144516
] | Hyleh7hqUH | true | [
"Improved a GAN based pixel inpainting network for compressed seismic image recovery andproposed a non-uniform sampling survey recommendatio, which can be easily applied to medical and other domains for compressive sensing technique."
] |
[
"In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota.",
"One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum.",
"In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative.",
"In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap.",
"In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions.",
"Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies.",
"However, when done naively, this randomness will inherently make their actions less informative to others during training.",
"We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase.",
"During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates.",
"By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.",
"Humans are highly social creatures and spend vast amounts of time coordinating, collaborating and communicating with others.",
"In contrast to these, at least partially, most progress on AI in games has been in zero-sum games where agents compete against each other, typically rendering communication futile.",
"This includes examples such as Go 2018) , poker (Brown & Sandholm, 2017; Moravčík et al., 2017; Brown & Sandholm, 2019) and chess (Campbell et al., 2002) .",
"This narrow focus is unfortunate, since communication and coordination require unique abilities.",
"In order to enable smooth and efficient social interactions of groups of people, it is commonly required to reason over the intents, points of views and beliefs of other agents from observing their actions.",
"For example, a driver can reasonably infer that if a truck in front of them is slowing down when approaching an intersection, then there is likely an obstacle ahead.",
"Furthermore, humans are both able to interpret the actions of others and can act in a way that is informative when their actions are being observed by others, capabilities that are commonly called theory of Mind (ToM), .",
"Importantly, in order to carry out this kind of reasoning, an agent needs to consider why a given action is taken and what this decision indicates about the state of the world.",
"Simply observing what other agents are doing is not sufficient.",
"While these abilities are particularly relevant in partially observable fully cooperative multi-agent settings, ToM reasoning clearly matters in a variety of real world scenarios.",
"For example, autonomous cars will likely need to understand the point of view, intents and beliefs of other traffic participants in order to deal with highly interactive settings such as 4-way crossing or dense traffic in cities.",
"Hanabi is a fully cooperative partially-observable card game that has recently been proposed as a new benchmark challenge problem for AI research (Bard et al., 2019) to fill the gap around ToM.",
"In Hanabi, players need to find conventions that allow them to effectively exchange information from their local observations through their actions, taking advantage of the fact that actions are observed by all team mates.",
"Most prior state-of-the-art agents for Hanabi were developed using handcrafted algorithms, which beat off-the-shelf deep multi-agent RL methods by a large margin.",
"This makes intuitive sense: Beyond the \"standard\" multi-agent challenges of credit assignment, non-stationarity and joint exploration, learning an informative policy presents an additional fundamentally new conflict.",
"On the one hand, an RL agent needs to explore in order to discover good policies through trial and error.",
"On the other hand, when carried out naively, this exploration will add noise to the policy of the agent during the training process, making their actions strictly less informative to their team mates.",
"One possible solution to this is to explore in the space of deterministic partial policies, rather than actions, and sample these policies from a distribution that conditions on a common knowledge Bayesian belief.",
"This is successfully carried out in the Bayesian Action Decoder (BAD) , the only previous Deep RL method to achieve a state of the art in Hanabi.",
"While this is a notable accomplishment, it comes at the cost of simplicity and generality.",
"For a start, BAD requires an explicit common knowledge Bayesian belief to be tracked, which not only adds computational burden due to the required sampling steps, but also uses expert knowledge regarding the game dynamics.",
"Furthermore, BAD, as presented, is trained using actor-critic methods which are sample inefficient and suffer from local optima.",
"In order to get around this, BAD uses population based training, further increasing the number of samples required.",
"Lastly, BAD's explicit reliance on common knowledge limits the generality of the method.",
"In this paper we propose the Simplified Action Decoder (SAD), a method that achieves a similar goal to BAD, but addresses all of the issues mentioned above.",
"At the core of SAD is a different approach towards resolving the conflict between exploration and being interpretable, which, like BAD, relies on the centralized training with decentralized control (CT/DC) regime.",
"Under CT/DC information can be exchanged freely amongst all agents during centralized training, as long as the final policies are compatible with decentralized execution.",
"The key insight is that, during training we do not have to chose between being informative, by taking greedy actions, and exploring, by taking random actions.",
"To be informative, the greedy actions do not need to be executed by the environment, but only need to be observed by the team mates.",
"Thus in SAD each agent takes two different actions at each time step: One greedy action, which is not presented to the environment but observed by the team mates at the next time step as an additional input, and the \"standard\" (exploratory) action that gets executed by the environment and is observed by the team mates as part of the environment dynamics.",
"Importantly, during greedy execution the observed environment action can be used instead of centralized information for the additional input, since now the agent has stopped exploring.",
"Furthermore, to ensure that these greedy actions and observations get decoded into a meaningful representation, we can optionally train an auxiliary task that predicts key hidden game properties from the action-observation trajectories.",
"While we note that this idea is in principle compatible with any kind of model-free deep RL method with minimal modifications to the core algorithm, we use a distributed version of recurrent DQN in order to improve sample efficiency, account for partial observability and reduce the risk of local optima.",
"We also train a joint-action Q-function that consists of the sum of per-agent Q-values to allow for off-policy learning in this multi-agent setting using Value Decomposition Networks (VDN) (Sunehag et al., 2017) .",
"Using SAD we establish a new SOTA for 2-5 players in Hanabi, with a method that not only requires less expert knowledge and compute, but is also more general than previous approaches.",
"In order to ensure that our results can be easily verified and extended, we also evaluate our method on a proof-of-principle matrix game and plan to open-source our training code and agents.",
"Beyond enabling more research into the self-play aspect of Hanabi, we believe these resources will provide a much needed starting point for the ad-hoc teamwork part of the Hanabi challenge.",
"In this paper we presented the Simplified Action Decoder (SAD), a novel deep multi-agent RL algorithm that allows agents to learn communication protocols in settings where no cheap-talk channel is available.",
"On the challenging benchmark Hanabi our work substantially improves the SOTA for an RL method for all numbers of players.",
"For two players SAD establishes a new high-score across any method.",
"Furthermore we accomplish all of this with a method that is both simpler and requires less compute than previous advances.",
"While these are encouraging steps, there is clearly more work to do.",
"In particular, there remains a large performance gap between the numbers achieved by SAD and the known performance of hat-coding strategies (Wu, 2018) for 3-5 players.",
"One possible reason is that SAD does not undertake any explicit exploration in the space of possible conventions.",
"Another promising route for future work is to integrate search with RL, since this has produced SOTA results in a number of different domains including Poker, Go and backgammon.",
"A NETWORK ARCHITECTURE AND HYPER-PAMAMETERS FOR HANABI Our Hanabi agent uses dueling network architecture (Wang et al., 2015) .",
"The main body of the network consists of 1 fully connected layer of 512 units and 2 LSTM (Hochreiter & Schmidhuber, 1997) layers of 512 units, followed by two output heads for value and advantages respectively.",
"The same network configuration is used across all Hanabi experiments.",
"We take the default featurization of HLE and replace the card knowledge section with the V0-Belief proposed by .",
"The maximum length of an episode is capped at 80 steps and the entire episode is stored in the replay buffer as one training sample.",
"This avoids the \"slate hidden states\" problem as described in Kapturowski et al. (2019) so that we can simply initialize the hidden states of LSTM as zero during training.",
"For exploration and experience prioritization, we follow the simple strategy as in Horgan et al. (2018) and Kapturowski et al. (2019) .",
"Each actor executes an i -greedy policy where",
".., N − 1} but with a smaller = 0.1 and α = 7. For simplicity, all players of a game use the same epsilon. The per time-step priority δ t is the TD error and per episode priority is computed following δ e = η max t δ i + (1 − η)δ where η = 0.9. Priority exponent is set to 0.9 and importance sampling exponent is set to 0.6. We use n-step return (Sutton, 1988) and double Q-learning (van Hasselt et al., 2015) for target computation during training. The discount factor γ is set to 0.999. The network is updated using Adam optimizer (Kingma & Ba, 2014) with learning rate = 6.25 × 10 −5 and = 1.5 × 10 −5 . Trainer sends its network weights to all actors every 10 updates and target network is synchronized with online network every 2500 updates. These hyper-parameters are fixed across all experiments.",
"In the baseline, we use Independent Q-Learning where each player estimates the Q value and selects action independently at each time-step.",
"Note that all players need to operate on the observations in order to update their recurrent hidden states while only the current player has non-trivial legal moves and other players can only select 'pass'.",
"Each player then writes its own version of the episode into the prioritized replay buffer and they are sampled independently during training.",
"The prioritized replay buffer contains 2 17 (131072) episodes.",
"We warm up the replay buffer with 10,000 episodes before training starts.",
"Batch size during training is 128 for games of different numbers of players.",
"As mentioned in Section 4, the SAD agent is built on top of joint Q-function where the Q value is the sum of the individual Q value of all players given their own actions.",
"One episode produces only one training sample with an extra dimension for the number of players.",
"The replay buffer size is reduced to 2 16 for 2-player and 3-player games and 2 15 for 4-player and 5-player games.",
"The batch sizes for 2-, 3-, 4-, 5-players are 64, 43, 32, 26 respectively to account for the fact that each sample contains more data.",
"Auxiliary task can be added to the agent to help it decode the greedy action more effectively.",
"In Hanabi, the natural choice is the predict the card of player's own hand.",
"In our experiments, the auxiliary task is to predict the status of a card, which can be playable, discardable, or unknown.",
"The loss is the average cross entropy loss per card and is simply added to the TD-error of reinforcement learning during training.",
"Figure 3 shows learning curves of different algorithms averaged over 13 seeds per algorithm per player setting.",
"Shading is error of the mean."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07692307233810425,
0.09756097197532654,
0.0833333283662796,
0.13636362552642822,
0.1071428507566452,
0.0833333283662796,
0.04999999329447746,
0.1818181723356247,
0.04444443807005882,
0.14814814925193787,
0,
0.12244897335767746,
0,
0,
0.038461532443761826,
0.08163265138864517,
0.145454540848732,
0.07843136787414551,
0,
0.04347825422883034,
0.0357142798602581,
0.145454540848732,
0.1111111044883728,
0.17777776718139648,
0,
0.0476190410554409,
0.039215680211782455,
0.14814814925193787,
0.21276594698429108,
0.052631575614213943,
0.072727270424366,
0,
0.04878048226237297,
0.05714285373687744,
0.2083333283662796,
0.07692307233810425,
0,
0.08510638028383255,
0.09756097197532654,
0.09090908616781235,
0,
0.1111111044883728,
0.09090908616781235,
0.145454540848732,
0.14814814925193787,
0.1599999964237213,
0.07999999821186066,
0.2222222238779068,
0.09756097197532654,
0.11764705181121826,
0.1395348757505417,
0.05714285373687744,
0.08510638028383255,
0.04999999329447746,
0.11538460850715637,
0.0476190410554409,
0.03703703358769417,
0.12121211737394333,
0.10256409645080566,
0,
0.04081632196903229,
0.04878048226237297,
0,
0.0624999962747097,
0,
0.11320754140615463,
0,
0,
0.05714285373687744,
0.05714285373687744,
0.04081632196903229,
0,
0.14999999105930328,
0.08510638028383255,
0.052631575614213943,
0,
0.09302324801683426,
0.0476190410554409,
0.05128204822540283,
0
] | B1xm3RVtwB | true | [
"We develop Simplified Action Decoder, a simple MARL algorithm that beats previous SOTA on Hanabi by a big margin across 2- to 5-player games."
] |
[
"We present an end-to-end trainable approach for optical character recognition (OCR) on printed documents.",
"It is based on predicting a two-dimensional character grid ('chargrid') representation of a document image as a semantic segmentation task.\n",
"To identify individual character instances from the chargrid, we regard characters as objects and use object detection techniques from computer vision.\n",
"We demonstrate experimentally that our method outperforms previous state-of-the-art approaches in accuracy while being easily parallelizable on GPU (thereby being significantly faster), as well as easier to train.",
"Optical Character Recognition (OCR) on documents is an age-old problem for which numerous open-source (e.g. [14] ) as well as proprietary solutions exist.",
"Especially in the sub-domain of printed documents, it is often regarded as being solved.",
"However, current state-of-the-art document-level OCR solutions (as far as the published research goes) consist of a complicated pipeline of steps, each one either a hand-optimized heuristic or requiring intermediate data and annotations to train.",
"Deep neural networks have been proven very successful in object detection tasks [8] .",
"In this work, we build on top of these developments and treat OCR as a semantic segmentation and object detection task for detecting and recognizing character instances on a page.",
"2 We introduce a new end-toend trainable OCR pipeline for (but not limited to) printed documents that is based on deep fully convolutional neural networks.",
"Our main contribution is to frame the OCR problem as an ultra-dense instance-segmentation task [5] for characters over the full input document image.",
"We do not rely on any pre-processing stages like binarization, deskewing, layout analysis.",
"Instead, our model learns directly from the raw document pixel data.",
"At the core of our method, we predict a chargrid representation [6] of the input document -a 1-hot encoded grid of characters.",
"Thus, we call our method Chargrid-OCR.",
"Additionally, we introduce two novel post-processing steps, both of which are crucial to performing fast and accurate dense OCR.",
"We show that our method can outperform line-based pipelines like e.g. Tesseract 4 [13] that rely on a combination of deep convolutional and recurrent networks with CTC loss [14, 1] while being significantly simpler to train.",
"* Equal contribution 2 A related task of recognizing text in natural images, referred to as Scene Text Recognition (STR), has been faster in adopting techniques from object detection in computer vision [3] .",
"However, compared to STR, document OCR deals with much denser text and very high accuracy requirements [2] .",
"2 Chargrid-OCR: OCR as an ultra-dense object detection task Chargrid-OCR method is a lexicon-free (only character-based), end-to-end trainable approach for OCR.",
"Given a document image, chargrid-OCR predicts character segmentation mask together with object bounding boxes for characters in one single step (see Fig 1) .",
"Both, semantic segmentation and object detection are common tasks in computer vision, e.g. [11, 8, 7] .",
"The character segmentation mask classifies each pixel into a character class and the character bounding box detects a bounding box around each character.",
"Both, our semantic segmentation and box detection (sub-)networks are fully convolutional and consist of only a single stage (like [8] and unlike [9] ).",
"Being single-stage is especially important as there may be thousands of characters (i.e. objects) on a single page which yields an ultra-dense object detection task.",
"We presented a new end-to-end trainable optical character recognition pipeline that is based on state-of-the-art computer vision approaches using object detection and semantic segmentation.",
"Our pipeline is significantly simpler compared to other sequential and line-based approaches, especially those used for document-level optical character recognition such as Tesseract 4.",
"We empirically show that our model outperforms Tesseract 4 on a number of diverse evaluation datasets by a large margin both in terms of accuracy and run-time."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.1818181723356247,
0.16326530277729034,
0.15686273574829102,
0.1428571343421936,
0.15094339847564697,
0.13636362552642822,
0.12903225421905518,
0.04651162400841713,
0.178571417927742,
0.1818181723356247,
0,
0.04651162400841713,
0,
0.12244897335767746,
0.0555555522441864,
0.16326530277729034,
0.12121211737394333,
0.16393442451953888,
0.08510638028383255,
0.11999999731779099,
0.07547169178724289,
0.12765957415103912,
0.08695651590824127,
0.11538460850715637,
0.1071428507566452,
0.3333333432674408,
0.03703703358769417,
0.3272727131843567
] | SkxhzT5qIB | true | [
"End-to-end trainable Optical Character Recognition on printed documents; we achieve state-of-the-art results, beating Tesseract4 on benchmark datasets both in terms of accuracy and runtime, using a purely computer vision based approach."
] |
[
"We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay.",
"In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with momentum and Adam/AdamW. \n",
"Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large batch setting, and (3) has two times smaller memory footprint than Adam.",
"The most popular algorithms for training Neural Networks (NNs) are Stochastic Gradient Descent (SGD) with momentum (Polyak, 1964; Sutskever et al., 2013) and Adam (Kingma & Ba, 2015) .",
"SGD with momentum is the preferred algorithm for computer vision, while Adam is the most commonly used for natural language processing (NLP) and speech problems.",
"Compared to SGD, Adam is perceived as safer and more robust to weight initialization and learning rate.",
"However, Adam has certain drawbacks.",
"First, as noted in the original paper (Kingma & Ba, 2015) , the second moment can vanish or explode, especially during the initial phase of training.",
"To alleviate this problem, a learning rate (LR) warmup (Goyal et al., 2017 ) is typically used.",
"Adam often leads to solutions that generalize worse than SGD (Wilson et al., 2017) , and to improve Adam regularization, Loshchilov & Hutter (2019) proposed AdamW with decoupled weight decay.",
"Our motivation for this work was to find an algorithm which: (1) performs equally well for image classification, speech recognition, machine translation, and language modeling, (2) is robust to learning rate and weight initialization, (3) has strong regularization properties.",
"We start with Adam, and then (1) replace the element-wise second moment with the layer-wise moment, (2) compute the first moment using gradients normalized by layer-wise second moment, (3) and decouple weight decay (similar to AdamW) from normalized gradients.",
"The resulting algorithm, NovoGrad, combines SGD's and Adam's strengths.",
"We applied NovoGrad to a variety of large scale problems -image classification, neural machine translation, language modeling, and speech recognition -and found that in all cases, it performs as well or better than Adam/AdamW and SGD with momentum."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.7096773982048035,
0.1818181723356247,
0.1304347813129425,
0.09090908616781235,
0.1621621549129486,
0.13333332538604736,
0,
0,
0,
0.2790697515010834,
0.11764705181121826,
0.22727271914482117,
0.0833333283662796,
0.1538461446762085
] | BJepq2VtDB | true | [
"NovoGrad - an adaptive SGD method with layer-wise gradient normalization and decoupled weight decay. "
] |
[
"Most generative models of audio directly generate samples in one of two domains: time or frequency.",
"While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived.",
"A third approach (vocoders/synthesizers) successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods.",
"In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods.",
"Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks.",
"Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources.",
"In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning.",
"The library will is available at https://github.com/magenta/ddsp and we encourage further contributions from the community and domain experts.\n",
"Neural networks are universal function approximators in the asymptotic limit (Hornik et al., 1989) , but their practical success is largely due to the use of strong structural priors such as convolution (LeCun et al., 1989) , recurrence (Sutskever et al., 2014; Williams & Zipser, 1990; Werbos, 1990) , and self-attention (Vaswani et al., 2017) .",
"These architectural constraints promote generalization and data efficiency to the extent that they align with the data domain.",
"From this perspective, end-to-end learning relies on structural priors to scale, but the practitioner's toolbox is limited to functions that can be expressed differentiably.",
"Here, we increase the size of that toolbox by introducing the Differentiable Digital Signal Processing (DDSP) library, which integrates interpretable signal processing elements into modern automatic differentiation software (TensorFlow) .",
"While this approach has broad applicability, we highlight its potential in this paper through exploring the example of audio synthesis.",
"Objects have a natural tendency to periodically vibrate.",
"Small shape displacements are usually restored with elastic forces that conserve energy (similar to a canonical mass on a spring), leading to harmonic oscillation between kinetic and potential energy (Smith, 2010) .",
"Accordingly, human hearing has evolved to be highly sensitive to phase-coherent oscillation, decomposing audio into spectrotemporal responses through the resonant properties of the basilar membrane and tonotopic mappings into the auditory cortex (Moerel et al., 2012; Chi et al., 2005; Theunissen & Elie, 2014) .",
"However, neural synthesis models often do not exploit this periodic structure for generation and perception.",
"The DDSP library fuses classical DSP with deep learning, providing the ability to take advantage of strong inductive biases without losing the expressive power of neural networks and end-to-end learning.",
"We encourage contributions from domain experts and look forward to expanding the scope of the DDSP library to a wide range of future applications.",
"A APPENDIX Figure 5 : Decomposition of a clip of solo violin.",
"Audio is visualized with log magnitude spectrograms.",
"Loudness and fundamental frequency signals are extracted from the original audio.",
"The loudness curve does not exhibit clear note segmentations because of the effects of the room acoustics.",
"The DDSP autoencoder takes those conditioning signals and predicts amplitudes, harmonic distributions, and noise magnitudes.",
"Note that the amplitudes are clearly segmented along note boundaries without supervision and that the harmonic and noise distributions are complex and dynamic despite the simple conditioning signals.",
"Finally, the extracted impulse response is applied to the combined audio from the synthesizers to give the full resynthesis audio.",
"Figure 6 : Diagram of the Additive Synthesizer component.",
"The synthesizer generates audio as a sum of sinusoids at harmonic (integer) multiples of the fundamental frequency.",
"The neural network is then tasked with emitting time-varying synthesizer parameters (fundamental frequency, amplitude, harmonic distribution).",
"In this example linear-frequency log-magnitude spectrograms show how the harmonics initially follow the frequency contours of the fundamental.",
"We then factorize the harmonic amplitudes into an overall amplitude envelope that controls the loudness, and a normalized distribution among the different harmonics that determines spectral variations."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07999999821186066,
0,
0.0952380895614624,
0.11428570747375488,
0.0476190447807312,
0.1090909093618393,
0.13793103396892548,
0,
0,
0.07692307233810425,
0.12121211737394333,
0.10526315122842789,
0.13793103396892548,
0,
0.052631575614213943,
0.04081632196903229,
0.07999999821186066,
0.21052631735801697,
0,
0,
0.11764705181121826,
0.0952380895614624,
0,
0,
0,
0.07999999821186066,
0,
0.07692307233810425,
0.07692307233810425,
0,
0
] | B1x1ma4tDr | true | [
"Better audio synthesis by combining interpretable DSP with end-to-end learning."
] |
[
"Spectral Graph Convolutional Networks (GCNs) are a generalization of convolutional networks to learning on graph-structured data.",
"Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification.",
"In this work, we address this limitation by revisiting a particular family of spectral graph networks, Chebyshev GCNs, showing its efficacy in solving graph classification tasks with a variable graph structure and size.",
"Current GCNs also restrict graphs to have at most one edge between any pair of nodes.",
"To this end, we propose a novel multigraph network that learns from multi-relational graphs.",
"We explicitly model different types of edges: annotated edges, learned edges with abstract meaning, and hierarchical edges.",
"We also experiment with different ways to fuse the representations extracted from different edge types.",
"This restriction is sometimes implied from a dataset, however, we relax this restriction for all kinds of datasets.",
"We achieve state-of-the-art results on a variety of chemical, social, and vision graph classification benchmarks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1
] | [
0.17777776718139648,
0.18518517911434174,
0.24137930572032928,
0.04444443807005882,
0.04651162400841713,
0.17777776718139648,
0.1395348757505417,
0.043478257954120636,
0.3181818127632141
] | ryxsCiAqKm | false | [
"A novel approach to graph classification based on spectral graph convolutional networks and its extension to multigraphs with learnable relations and hierarchical structure. We show state-of-the art results on chemical, social and image datasets."
] |
[
"Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks.",
"Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior.",
"While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs to the training set and can thus be detected even by fairly rudimentary data filtering.",
"In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data.",
"The key feature is that the resulting poisoned inputs appear to be consistent with their label and thus seem benign even upon human inspection.",
"Over the last decade, deep learning has made unprecedented progress on a variety of notoriously difficult tasks in computer vision BID16 BID11 , speech recognition BID8 , machine translation BID28 , and game playing BID20 BID27 .",
"Despite this remarkable performance, real-world deployment of such systems remains challenging due to concerns about security and reliability.",
"One particular example receiving significant attention is the existence of adversarial examples -inputs with imperceptible adversarial perturbations that are misclassified with high confidence BID29 BID7 .",
"Such adversarial perturbations can be constructed for a wide range of models, while requiring minimal model knowledge BID22 BID4 and being applicable to real-world scenarios BID26 BID17 BID1 .However",
", this brittleness during inference is not the only vulnerability of existing ML approaches. Another",
"vulnerability corresponds to a different aspect of the ML pipeline: training. State-of-the-art",
"ML models require large amounts of data to achieve good performance. Unfortunately, large",
"datasets are expensive to generate and curate; it is hence common practice to use training examples sourced from other -often untrusted -sources. This practice is usually",
"justified by the robustness of ML models to input and label noise BID24 ) -bad samples might only slightly degrade the model's performance. While this reasoning is",
"valid when only benign noise is present, it breaks down when the noise is maliciously crafted. Attacks based on injecting",
"such malicious noise to the training set are known as data poisoning attacks BID2 .A well-studied form of data",
"poisoning aims to use the malicious samples to reduce the test accuracy of the resulting model BID31 BID21 BID19 BID3 . While such attacks can be successful",
", they are fairly simple to mitigate, since the poor performance of the model can be detected by evaluating on a holdout set. Another form of attack, known as targeted",
"poisoning attacks, aims to misclassify a specific set of inputs at inference time BID14 . These attacks are harder to detect. Their",
"impact is restricted, however, as they",
"only affect the model's behavior on a limited, pre-selected set of inputs.Recently, BID9 proposed a backdoor attack. The purpose of this attack is to plant a backdoor",
"in any model trained on the poisoned training set. This backdoor is activated during inference by a",
"backdoor trigger which, whenever present in a given input, forces the model to predict a specific (likely incorrect) target label. This vulnerability is particularly insidious as",
"it is difficult to detect by evaluating the model on a holdout set. The BID9 attack is based on randomly selecting",
"a small portion of the training set, applying a backdoor trigger to these inputs and changing their labels to the target label. This strategy is very effective. However, it crucially",
"relies on the BID9 Clean-label baseline",
"GAN-based (ours) Adv. example-based (ours) Figure 1 : An example image, labeled as an airplane, poisoned using different strategies: the BID9 attack, the baseline of the same attack restricted to only clean labels, our GAN-based attack, and our adversarial examples-based attack (left to right). The original BID9 attack image is clearly mislabeled while the",
"rest of the images appear plausible. We use the same pattern as BID9 for consistency, but our attacks",
"use a reduced amplitude, as described in Section B.1.assumption that the poisoned inputs introduced to the training set by the adversary can be arbitraryincluding clearly mislabeled input-label pairs. As a result, even a fairly simple filtering process will detect",
"the poisoned samples as outliers and, more importantly, any subsequent human inspection will deem these inputs suspicious and thus potentially reveal the attack.The goal of this paper is to investigate whether the usage of such clearly mislabeled (and thus suspicious) images is really necessary. That is, can such backdoor attacks be carried out when we insist",
"that each poisoned input and its label must be consistent, even to a human?",
"We investigate the backdoor attacks of BID9 in the presence of a simple data filtering scheme.",
"While their attack is powerful, it crucially relies on the addition of arbitrary, mostly mislabeled, inputs into the training set and can thus be detected by filtering.",
"Human inspection of the identified outliers will clearly flag the poisoned samples as unnatural.",
"We argue that, for a backdoor attack to be insidious, it must not rely on inputs that appear mislabeled upon examination.",
"To remain successful under the clean-label restriction, we propose perturbing the poisoned inputs to render them more difficult to classify.",
"We restrict the magnitude of these changes so that the true label remains plausible.We propose two methods for increasing classification difficulty: adversarial p -bounded perturbations and GAN-based interpolation.",
"We find that both methods introduce a backdoor more successfully than the clean-label adaptation of the BID9 attack.These findings demonstrate that backdoor attacks can be made significantly harder to detect than one might initially expect.",
"This emphasizes the need for developing principled and effective methods for protecting ML models from such attacks.A THE ORIGINAL ATTACK OF GU We replicate the experiments of BID9 on the CIFAR-10 ( BID15 dataset.",
"The original work considered the case where the model is trained by an adversary, since they focused on the transfer learning setting.",
"The authors accordingly imposed essentially no constraints on the number of poisoned samples used.",
"In contrast, we study the threat model where an attacker is only allowed to poison a limited number of samples in the dataset.",
"We are thus interested in understanding the fraction of poisoned samples required to ensure that the resulting model indeed has an exploitable backdoor.",
"In Figure A , we plot the attack success rate for different target labels and number of poisoned examples injected.",
"We observe that the attack is very successful even with a small (∼ 75) number of poisoned samples.",
"Note that the poisoning percentages here are calculated relative to the entire dataset.",
"The x-axis thus corresponds to the same scale in terms of examples poisoned as the rest of the plots.",
"While the attack is very effective, most image labels are clearly incorrect (Figure 1) .",
"Despite our above focus on the plausibility of the base image, the backdoor pattern itself could also cause plausibility problems if its presence appears unnatural.To mitigate this potential suspicion, we consider a modified backdoor pattern.",
"Instead of entirely replacing the bottom-right 3-pixel-by-3-pixel square with the pattern, we perturb the original pixel values by a backdoor pattern amplitude.",
"In pixels that are white in the original pattern, we add this amplitude to each color channel (i.e. red, green and blue).",
"Conversely, for black pixels, we subtract this amplitude from each channel.",
"We then clip these values to the normal range of pixel values.",
"(Here, the range is [0, 255] .)",
"Note that when the backdoor pattern amplitude is 255 or greater, this attack is always equivalent to applying the original backdoor pattern.",
"We extend our proposed adversarial example-based attack to reduced backdoor pattern amplitudes.We explore this attack with a random class (the dog class), considering backdoor pattern amplitudes of 16, 32 and 64.",
"All (non-zero) backdoor pattern amplitudes resulted in substantial attack success rates at poisoning percentages of 6% and higher.",
"Higher amplitudes conferred higher attack success rates.",
"At the two lower poisoning percentages tested, the attack success rate was near zero.",
"These results are shown in FIG8 .Image",
"plausibility is greatly improved by reducing the backdoor pattern amplitude. Examples",
"of an image at varying backdoor pattern amplitudes are shown in FIG9 . A more complete",
"set of examples is available in Appendix C.3.3.We have chosen a backdoor pattern amplitude of 32 for further investigation as a balance between conspicuousness and attack success. We tested this",
"attack on all classes, finding similar performance across the classes. These results",
"are shown in FIG8 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.1538461446762085,
0.08888888359069824,
0.13333332538604736,
0.0555555522441864,
0,
0.06666666269302368,
0,
0.04878048226237297,
0,
0.1666666567325592,
0.0833333283662796,
0.11428570747375488,
0.052631575614213943,
0,
0.19999998807907104,
0.11428570747375488,
0.04999999701976776,
0.1249999925494194,
0,
0.11428570747375488,
0.13793103396892548,
0.10810810327529907,
0.06451612710952759,
0.25641024112701416,
0,
0.0363636314868927,
0.13793103396892548,
0.08163265138864517,
0.0952380895614624,
0.07692307233810425,
0.23076923191547394,
0.052631575614213943,
0,
0.1818181723356247,
0.06666666269302368,
0.05128204822540283,
0.22727271914482117,
0.09090908616781235,
0,
0,
0.05882352590560913,
0.1764705777168274,
0.0624999962747097,
0.06666666269302368,
0.0833333283662796,
0.0714285671710968,
0.07692307233810425,
0.04651162400841713,
0.0624999962747097,
0.05714285373687744,
0,
0.17391303181648254,
0,
0.13333332538604736,
0.1538461446762085,
0.06666666269302368,
0,
0,
0,
0.08695651590824127,
0.07407406717538834,
0.09756097197532654,
0,
0
] | HJg6e2CcK7 | true | [
"We show how to successfully perform backdoor attacks without changing training labels."
] |
[
"Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance.",
"However, the differences between the learned solutions of networks which generalize and those which do not remain unclear.",
"Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated.",
"Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyper- parameters, and over the course of training.",
"While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units.",
"Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.",
"Recent work has demonstrated that deep neural networks (DNNs) are capable of memorizing extremely large datasets such as ImageNet BID39 .",
"Despite this capability, DNNs in practice achieve low generalization error on tasks ranging from image classification BID17 to language translation BID37 .",
"These observations raise a key question: why do some networks generalize while others do not?Answers",
"to these questions have taken a variety of forms. A variety",
"of studies have related generalization performance to the flatness of minima and PAC-Bayes bounds BID18 BID20 BID27 BID14 , though recent work has demonstrated that sharp minima can also generalize BID13 . Others have",
"focused on the information content stored in network weights BID0 , while still others have demonstrated that stochastic gradient descent itself encourages generalization BID8 BID34 BID36 .Here, we use",
"ablation analyses to measure the reliance of trained networks on single directions. We define a",
"single direction in activation space as the activation of a single unit or feature map or some linear combination of units in response to some input. We find that",
"networks which memorize the training set are substantially more dependent on single directions than those which do not, and that this difference is preserved even across sets of networks with identical topology trained on identical data, but with different generalization performance. Moreover, we",
"found that as networks begin to overfit, they become more reliant on single directions, suggesting that this metric could be used as a signal for early stopping.We also show that networks trained with batch normalization are more robust to cumulative ablations than networks trained without batch normalization and that batch normalization decreases the class selectivity of individual feature maps, suggesting an alternative mechanism by which batch normalization may encourage good generalization performance. Finally, we",
"show that, despite the focus on selective single units in the analysis of DNNs (and in neuroscience; BID21 BID40 BID28 BID9 , the class selectivity of single units is a poor predictor of their importance to the network's output.",
"In this work, we have taken an empirical approach to understand what differentiates neural networks which generalize from those which do not.",
"Our experiments demonstrate that generalization capability is related to a network's reliance on single directions, both in networks trained on corrupted and uncorrupted data, and over the course of training for a single network.",
"They also show that batch normalization, a highly successful regularizer, seems to implicitly discourage reliance on single directions.One clear extension of this work is to use these observations to construct a regularizer which more directly penalizes reliance on single directions.",
"As it happens, the most obvious candidate to regularize single direction reliance is dropout (or its variants), which, as we have shown, does not appear to regularize for single direction reliance past the dropout fraction used in training (Section 3.3).",
"Interestingly, these results suggest that one is able to predict a network's generalization performance without inspecting a held-out validation or test set.",
"This observation could be used in several interesting ways.",
"First, in situations where labeled training data is sparse, testing networks' reliance on single directions may provide a mechanism to assess generalization performance without sacrificing training data to be used as a validation set.",
"Second, by using computationally cheap empirical measures of single direction reliance, such as evaluating performance at a single ablation point or sparsely sampling the ablation curve, this metric could be used as a signal for early-stopping or hyperparameter selection.",
"We have shown that this metric is viable in simple datasets (Section 3.2), but further work will be necessary to evaluate viability in more complicated datasets.Another interesting direction for further research would be to evaluate the relationship between single direction reliance and generalization performance across different generalization regimes.",
"In this work, we evaluate generalization in which train and test data are drawn from the same distribution, but a more stringent form of generalization is one in which the test set is drawn from a unique, but overlapping distribution with the train set.",
"The extent to which single direction reliance depends on the overlap between the train and test distributions is also worth exploring in future research.This work makes a potentially surprising observation about the role of individually selective units in DNNs.",
"We found not only that the class selectivity of single directions is largely uncorrelated with their ultimate importance to the network's output, but also that batch normalization decreases the class selectivity of individual feature maps.",
"This result suggests that highly class selective units may actually be harmful to network performance.",
"In addition, it implies than methods for understanding neural networks based on analyzing highly selective single units, or finding optimal inputs for single units, such as activation maximization BID15 ) may be misleading.",
"Importantly, as we have not measured feature selectivity, it is unclear whether these results will generalize to featureselective directions.",
"Further work will be necessary to clarify all of these points."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.2978723347187042,
0.16393442451953888,
0.2535211145877838,
0.27586206793785095,
0.28169015049934387,
0.23529411852359772,
0.038461532443761826,
0.08888888359069824,
0.04878048226237297,
0.16393442451953888,
0.09999999403953552,
0.3478260934352875,
0.29629629850387573,
0.37681159377098083,
0.3777777850627899,
0.19354838132858276,
0.1538461446762085,
0.26229506731033325,
0.307692289352417,
0.1846153736114502,
0.038461532443761826,
0,
0.16393442451953888,
0.1538461446762085,
0.24657534062862396,
0.22580644488334656,
0.23529411852359772,
0.29999998211860657,
0.08695651590824127,
0.16393442451953888,
0.11999999731779099,
0.0476190447807312
] | r1iuQjxCZ | true | [
"We find that deep networks which generalize poorly are more reliant on single directions than those that generalize well, and evaluate the impact of dropout and batch normalization, as well as class selectivity on single direction reliance."
] |
[
"Typical amortized inference in variational autoencoders is specialized for a single probabilistic query.",
"Here we propose an inference network architecture that generalizes to unseen probabilistic queries.",
"Instead of an encoder-decoder pair, we can train a single inference network directly from data, using a cost function that is stochastic not only over samples, but also over queries.",
"We can use this network to perform the same inference tasks as we would in an undirected graphical model with hidden variables, without having to deal with the intractable partition function.",
"The results can be mapped to the learning of an actual undirected model, which is a notoriously hard problem.",
"Our network also marginalizes nuisance variables as required. ",
"We show that our approach generalizes to unseen probabilistic queries on also unseen test data, providing fast and flexible inference.",
"Experiments show that this approach outperforms or matches PCD and AdVIL on 9 benchmark datasets.",
"Learning the parameters of an undirected probabilistic graphical model (PGM) with hidden variables using maximum likelihood (ML) is a notably difficult problem (Welling and Sutton, 2005; Kuleshov and Ermon, 2017; Li et al., 2019) .",
"When all variables are observed, the range of applicable techniques is broadened (Sutton and McCallum, 2005; Sutton and Minka, 2006; Sutton and McCallum, 2007; Bradley, 2013) , but the problem remains intractable in general.",
"When hidden variables are present, the intractability is twofold:",
"(a) integrating out the hidden variables (also a challenge in directed models) and",
"(b) computing the partition function.",
"The second problem is generally deemed to be harder (Welling and Sutton, 2005) .",
"After learning, the probabilistic queries are in most cases not tractable either, so one has to resort to approximations such as belief propagation or variational inference.",
"These approximations operate in the same way regardless of whether the model is directed, and do not need to compute the partition function.",
"In general, ML learning is harder than inference both in directed and undirected models, but even more so in the latter case.",
"Approximate inference via belief propagation (BP) or variational inference (VI) can be cast as an optimization problem.",
"As such, it rarely has a closed-form solution and is instead solved iteratively, which is computationally intensive.",
"To address this problem, one can use amortized inference.",
"A prime example of this are variational autoencoders (Kingma and Welling, 2013) : a learned function (typically a neural network) is combined with the reparameterization trick (Rezende et al., 2014; Titsias and Lázaro-Gredilla, 2014) to compute the posterior over the hidden variables given the visible ones.",
"Although a variational autoencoder (VAE) performs inference much faster than actual VI optimization, this is not without limitations: they are specialized to answer a single predefined query.",
"In contrast, BP and VI answer arbitrary queries, albeit usually need more computation time.",
"The end goal of learning the parameters of a PGM is to obtain a model that can answer arbitrary probabilistic queries.",
"A probabilistic query requests the distribution of a subset of the variables of the model given some (possibly soft) evidence about another subset of variables.",
"This allows, for instance, to train a model on full images and then perform inpainting in a test image in an arbitrary region that was not known at training time.",
"Since the end goal is to be able to perform arbitrary inference, in this work we suggest to learn a system that is able to answer arbitrary probabilistic queries and avoid ML learning altogether, which completely sidesteps the difficulties associated to the partition function.",
"This puts directed and undirected models on equal footing in terms of usability.",
"To this end, we first unroll inference (we will use BP, but other options are possible) over iterations into a neural network (NN) that outputs the result of an arbitrary query, and then we train said NN to increase its prediction accuracy.",
"At training time we randomize the queries, looking for a consistent parameterization of the NN that generalizes to new queries.",
"The hope for existence of such a parameterization comes from BP actually working for arbitrary queries in a graphical model with a single parameterization.",
"We call this approach query training (QT).",
"Query training is a general approach to learn to infer when the inference target is unknown at training time.",
"It offers the following advantages:",
"1) no need to estimate the partition function or its gradient (the \"sleep\" phase of other common algorithms);",
"2) produces an inference network, which can be faster and more accurate than iterative VI or BP because its weights are trained to compensate for the imperfections of approximate inference run for a small number of iterations;",
"3) arbitrary queries can be solved.",
"In contrast, a VAE is only trained to infer the posterior over the hidden variables, or some other constant query.",
"Why would QT-NNs generalize to new queries or scale well?",
"The worry is that only a small fraction of the exponential number of potential queries is seen during training.",
"The existence of a single inference network that works reasonably well for many different queries follows from the existence of a single PGM in which BP can approximate inference.",
"The discoverability of such a network from limited training data is not guaranteed.",
"However, there is hope for it, since the amount of training data required to adjust the model parameters should scale with the number of these, and not with the number of potential queries.",
"Just like training data should come from the same distribution as test data, the training queries must come from the same distribution the test queries to avoid \"query overfitting\".",
"In future work we will show how QT can be used in more complex undirected models, such as grid MRFs.",
"Other interesting research avenues are modifications to allow sample generation and unroll other inference mechanisms, such as VI."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.3636363446712494,
0.4583333432674408,
0.3333333432674408,
0.307692289352417,
0.06896550953388214,
0.25641024112701416,
0.05714285373687744,
0.29629629850387573,
0.08163265138864517,
0.06896550953388214,
0.12121211737394333,
0.07999999821186066,
0,
0.17777776718139648,
0.19512194395065308,
0.1463414579629898,
0.1666666567325592,
0.0555555522441864,
0.13793103396892548,
0.09677419066429138,
0.1304347813129425,
0.05882352590560913,
0.5641025304794312,
0.2631579041481018,
0.1666666567325592,
0.290909081697464,
0.060606054961681366,
0.2295081913471222,
0.25641024112701416,
0.29999998211860657,
0,
0.2222222238779068,
0.07999999821186066,
0.10526315122842789,
0.2222222238779068,
0.1538461446762085,
0.10256409645080566,
0.06666666269302368,
0.2702702581882477,
0.40909090638160706,
0.24242423474788666,
0.21739129722118378,
0.25641024112701416,
0.04999999329447746,
0.052631575614213943
] | rJeoKJ3NKr | true | [
"Instead of learning the parameters of a graphical model from data, learn an inference network that can answer the same probabilistic queries."
] |
[
"A plethora of computer vision tasks, such as optical flow and image alignment, can be formulated as non-linear optimization problems.",
"Before the resurgence of deep learning, the dominant family for solving such optimization problems was numerical optimization, e.g, Gauss-Newton (GN).",
"More recently, several attempts were made to formulate learnable GN steps as cascade regression architectures.",
"In this paper, we investigate recent machine learning architectures, such as deep neural networks with residual connections, under the above perspective.",
"To this end, we first demonstrate how residual blocks (when considered as discretization of ODEs) can be viewed as GN steps.",
"Then, we go a step further and propose a new residual block, that is reminiscent of Newton's method in numerical optimization and exhibits faster convergence.",
"We thoroughly evaluate the proposed Newton-ResNet by conducting experiments on image and speech classification and image generation, using 4 datasets.",
"All the experiments demonstrate that Newton-ResNet requires less parameters to achieve the same performance with the original ResNet.",
"A wealth of computer vision problems (e.g., structure from motion (Buchanan & Fitzgibbon, 2005) , stereo (Lucas et al., 1981; Clark et al., 2018) , image alignment (Antonakos et al., 2015) , optical flow (Zikic et al., 2010; Baker & Matthews, 2004; Rosman et al., 2011) ) are posed as nonlinear optimization problems.",
"Before the resurgence of the machine learning era, the dominant family for solving such optimization problems 1 was numerical optimization, e.g, Gauss-Newton (GN).",
"Recently, it was proposed that the GN steps, called descent directions, can be learned and represented as a cascade regression to solve non-linear least square problems (Xiong & De la Torre, 2013) .",
"With the advent of deep learning, the aforementioned ideas were combined with learnable feature representations using deep convolutional neural networks for solving problems such as alignment and stereo (Trigeorgis et al., 2016; Clark et al., 2018) .",
"In this paper, we first try to draw similarities between learning descent directions and the structure of the popular residual networks.",
"Motivated by that, we further extend residual learning by adopting ideas from Newton's numerical optimization method, which exhibits faster convergence rate than Gauss-Newton based methods (both theoretically and empirically as we show in our experiments).",
"ResNet (He et al., 2016) is among the most popular architectures for approximating non-linear functions through learning.",
"The core component of ResNet is the residual block which can be seen as a linear difference equation.",
"That is, the t th residual block is expressed as x t`1 \" x t`C x t for input x t .",
"By considering the residual block as a discretization of Euler ODEs (Haber et al., 2018; Chen et al., 2018) , each residual block expresses a learnable, first order descent direction.",
"We propose to accelerate the convergence (i.e., employ fewer residual blocks) in approximation of non-linear functions by introducing a novel residual block that exploits second-order information in analogy to Newton's method in non-linear optimization.",
"Since the (second order) derivative is not analytically accessible, we rely on the idea of Xiong & De la Torre (2014) to learn the descent directions by exploiting second order information of the input.",
"We build a deep model, called Newton-ResNet, that involves the proposed residual block.",
"Newton-ResNet requires less residual blocks to achieve the same accuracy compared to original ResNet.",
"This is depicted in Fig. 1 ; the contour 2 shows the loss landscape near the minimum of each method and indeed the proposed method requires fewer steps.",
"Our contributions are as follows:",
"• We first establish a conceptual link between residual blocks in deep networks and standard optimization techniques, such as Gauss-Newton.",
"This motivates us to design a novel residual block that learns the descent directions with second order information (akin to Newton steps in nonlinear optimization).",
"A deep network composed of the proposed residual blocks is coined as Newton-ResNet.",
"• We show that Newton-ResNet can effectively approximate non-linear functions, by demonstrating that it requires less residual blocks and hence significantly less parameters to achieve the performance of the original ResNet.",
"We experimentally verify our claim on four different datasets of images and speech in classification tasks.",
"Additionally, we conduct experiments on image generation with Newton-ResNet-based GAN (Goodfellow et al., 2014) .",
"• We empirically demonstrate that Newton-ResNet is a good function approximator even in the absence of activation functions, where the corresponding ResNet performs poorly.",
"In this work, we establish a link between the residual blocks of ResNet architectures and learning decent directions in solving non-linear least squares (e.g., each block can be considered as a decent direction).",
"We exploit this link and we propose a novel residual block that uses second order interactions as reminiscent of Newton's numerical optimization method (i.e., learning Newton-like descent directions).",
"Newton-type methods are likely to converge faster than first order methods (e.g., Gauss-Newton).",
"We demonstrate that in the proposed architecture this translates to less residual blocks (i.e., less decent directions) in the network for achieving the same performance.",
"We conduct validation experiments on image and audio classification with residual networks and verify our intuition.",
"Furthermore, we illustrate that with our block it is possible to remove the non-linear activation functions and still achieve competitive performance."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14999999105930328,
0.04878048226237297,
0.0555555522441864,
0.1428571343421936,
0.4390243887901306,
0.27272728085517883,
0.051282044500112534,
0.10810810327529907,
0.0317460261285305,
0.04651162400841713,
0.18867923319339752,
0.03703703358769417,
0.09756097197532654,
0.14814814925193787,
0,
0.307692289352417,
0.1621621549129486,
0.21739129722118378,
0.307692289352417,
0.15686273574829102,
0.29411762952804565,
0.11764705181121826,
0,
0.07692307233810425,
0.2926829159259796,
0.31111109256744385,
0.1764705777168274,
0.20408162474632263,
0.05405404791235924,
0.0555555522441864,
0.1818181723356247,
0.29629629850387573,
0.3921568691730499,
0.05714285373687744,
0.22727271914482117,
0.1111111044883728,
0.1428571343421936
] | BkxaXeHYDB | true | [
"We demonstrate how residual blocks can be viewed as Gauss-Newton steps; we propose a new residual block that exploits second order information."
] |
[
"In competitive situations, agents may take actions to achieve their goals that unwittingly facilitate an opponent’s goals.",
"We\n",
"consider a domain where three agents operate: (1) a user (human), (2) an attacker (human or a software) agent and (3) an observer (a software) agent.",
"The user and the attacker compete to achieve different goals.",
"When there is a disparity in the domain knowledge the user and the attacker possess, the attacker may use the user’s unfamiliarity with the domain to\n",
"its advantage and further its own goal.",
"In this situation, the observer, whose goal is to support the user may need to intervene, and this intervention needs to occur online, on-time and be accurate.",
"We formalize the online plan intervention problem and propose a solution that uses a decision tree classifier to identify intervention points in situations where agents unwittingly facilitate an opponent’s goal.",
"We trained a classifier using domain-independent features extracted from the observer’s decision space to evaluate the “criticality” of the current state.",
"The trained model is then used in an online setting on IPC benchmarks to identify observations that warrant intervention.",
"Our contributions lay a foundation for further work in the area of deciding when to intervene.",
"When an agent is executing a plan to achieve some goal, it's progress may be challenged by unforeseen changes such as an unexpected modification to the environment or an adversary subverting the agent's goal.",
"In these situations, a passive observer intervening to help the agent reach it's intended goal will be beneficial.",
"Intervention is different from the typical plan recognition problem because we assume the observed agent pursues desirable goals while avoiding undesirable states.",
"Therefore, the observer must (1) monitor actions/state unobtrusively to predict trajectories of the observed agent (keyhole recognition) and (2) assist the observed agent to safely complete the intended task or block the current step if unsafe.",
"Consider a user checking email on a computer.",
"An attacker who wants to steal the user's password makes several approaches: sending an email with a link to a phishing website and sending a PDF file attachment embedded with a keylogger.",
"The user, despite being unaware of the attacker's plan, would like to complete the task of checking email safely and avoid the attacker's goal.",
"Through learning, our observer can recognize risky actions the user may execute in the environment and ensure safety.The decision of when to intervene must be made judicially.",
"Intervening too early may lead to wasted effort chasing down false positives, helpful warnings being ignored as a nuisances, or leaking information for the next attack.",
"Intervening too late may result in the undesirable state.",
"Further, we are interested in assisting a human user with different skill levels, who would benefit more from customized intervention.",
"To this end, we need to identify actions that warrant intervention over three different time horizons: (1) critical action, which if unchecked will definitely trigger the undesirable state, (2) mitigating action, which gives the user some time to react because the threat is not imminent and (3) preventing actions, which allows for long term planning to help the user avoid threats.",
"Based on the time horizon we are current in, we can then plan to correct course accordingly.",
"In this work we focus on identifying the first horizon.",
"Intervention is useful in both online settings, where undesirable states may arrive incrementally and in offline settings where observations are available prior to intervention.In this paper, we model online intervention in a competitive environment where three agents operate: (1) a user (human), (2) an attacker (human or a software) agent and (3) an observer (a software) agent who will intervene the user.",
"The observer passively monitors the user and the attacker competing to achieve different goals.",
"The attacker attempts (both actively and passively) to leverage the progress made by a user to achieve its own goal.",
"The attacker may mask domain knowledge available to the user to expand the attack vector and increase the likelihood of a successfull attack.",
"The user is pursuing a desirable goal while avoiding undesirable states.",
"Using domain-independant features, we train a decision tree classifier to help the observer decide whether to intervene.",
"A variation of the relaxed plan graph BID0 ) models the desirable, undesirable and neutral states that are reachable at different depths.",
"From the graph, we extract several domain independent features: risk, desirability, distances remaining to desirable goal and undesirable states and active landmarks percentage.We train a classifier to recognize an observation as a intervention point and evaluate the learned model on previously unseen observation traces to assess the accuracy.",
"Furthermore, the domain independent features used in the classifier offer a mechanism to explain why the intervention occurred.",
"In real-time, making the decision to intervene for each observation may be costly.",
"We examine how the observer can establish a wait time without compromising accuracy.The contributions of this paper include: (1) formalizing the online intervention problem as an intervention graph that extends the planning graph, (2) introducing domainindependent features that estimate the criticality of the current state to cause a known undesirable state, (3) presenting an approach that learns to classify an observation as intervention or not, (4) incorporating salient features that are better predictors of intervention to generate explanations, and (5) showing this approach works well with benchmarks.",
"We focus on two questions: (1) Using domain-independent features indicative of the likelihood to reach G u from current state, can the intervening agent correctly interrupt to prevent the user from reaching G u ?",
"and (2) If the user was not interrupted now, how can we establish a wait time until the intervention occurred before G u ?",
"To address the first question, we evaluated the performance of the learned model to predict intervention on previously unseen problems.The experiment suit consists of the two example domains from Section 2.",
"To this we added Navigator and Ferry domains from IPC benchmarks.",
"In Navigator domain, an agent simply moves from one point in grid to another goal destination.",
"In the Ferry domain, a single ferry moves cars between different locations.",
"To simulate intervention in active attacker case (the Block-Words domain), we chose word building problems.",
"The words user and the attacker want to build are different but they have some common letters (e.g., TAD/BAD).",
"The attacker is able to exploit the user's progress on stacking blocks to complete word the attacker wants to build.",
"In Easy-IPC and Navigator domains, we designated certain locations on the grid as traps.",
"The goal of the robot is to navigate to a specific point on the grid safely.",
"In the Ferry domain a port is compromised and a ferry carrying a car there results in an undesirable state.",
"The ferry's objective is to transport cars to specified locations without passing a compromised port.In addition to the trained data set, we also generated 3 separate instances of 20 problems each (total of 60) for the benchmark domains to produce testing data for the learned model.",
"The three instances contained intervention problems that were different the trained instances.",
"For example, number of blocks in the domain (block-words), size of grid (navigator, easy-ipc), accessible and inaccessible paths on the grid (navigator, easy-ipc), properties of artifacts in the grid (easy-ipc).",
"For each instance we generated 10 observation traces for each planning problem (i.e., 200 observation traces per instance).",
"We define true-positive as the classifier correctly predicting \"Y\".",
"True-negative is an instance where the classifier correctly predicts \"N\".",
"False-positives are instances where classifier incorrectly predicts an observation as an interrupt.",
"False-negatives are instances where the classifier incorrectly predicts the observation not as an interrupt."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1111111044883728,
0.04878048226237297,
0.13333332538604736,
0.1538461446762085,
0,
0.0952380895614624,
0.25,
0.4615384638309479,
0.1538461446762085,
0.2222222238779068,
0.11999999731779099,
0.15789473056793213,
0.09756097197532654,
0.16326530277729034,
0.07407406717538834,
0.1304347813129425,
0.14999999105930328,
0.12765957415103912,
0.1304347813129425,
0.20689654350280762,
0.04999999329447746,
0.11267605423927307,
0.1666666567325592,
0.06666666269302368,
0.1428571343421936,
0.12121211737394333,
0.1538461446762085,
0.20512819290161133,
0.12903225421905518,
0.1666666567325592,
0.19512194395065308,
0.19672130048274994,
0.2222222238779068,
0.12121211737394333,
0.3218390643596649,
0.2857142686843872,
0.09302324801683426,
0.1666666567325592,
0,
0.0555555522441864,
0.1249999925494194,
0,
0.09756097197532654,
0.1111111044883728,
0.05882352590560913,
0.23529411852359772,
0.21052631735801697,
0.16949151456356049,
0.12903225421905518,
0.09756097197532654,
0,
0.13793103396892548,
0.06666666269302368,
0,
0.060606054961681366
] | rkgqQp3mcV | true | [
"We introduce a machine learning model that uses domain-independent features to estimate the criticality of the current state to cause a known undesirable state."
] |
[
"Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.",
"However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision.",
"While previous work has focused on exploiting statistical independence to \\textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations.",
"Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models.",
"This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.",
"Deep generative models, by learning a non-linear function mapping a latent space to the space observations, have proven successful at designing realistic images in a variety of complex domains (objects, animals, human faces, interior scenes).",
"In particular, two kinds of approaches emerged as state-of-the-art (SOTA): Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) , and Variational Autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) .",
"Efforts have been made to have such models produce disentangled latent representations that can control interpretable properties of images (Kulkarni et al., 2015; Higgins et al., 2017) .",
"However, the resulting models are not necessarily mechanistic (or causal) in the sense that interpretable properties of an image cannot be ascribed to a particular part, a module, of the network architecture.",
"Gaining access to a modular organization of generative models would benefit the interpretability and allow extrapolations, such as generating an object in a background that was not previously associated with this object, as illustrated in a preview of our experimental results in Fig. 1 .",
"Such extrapolations are an integral part of human representational capabilities (consider common expressions such as \"like an elephant in a china shop\") and consistent with the modular organization of its visual system, comprising specialized regions encoding objects, faces and places (see e.g. GrillSpector & Malach (2004) ).",
"Extrapolations moreover likely support adaptability to environmental changes and robust decision making (Dvornik et al., 2018) .",
"How to leverage trained deep generative architectures to perform such extrapolations is an open problem, largely due to the non-linearities and high dimensionality that prevent interpretability of computations performed in successive layers.",
"In this paper, we propose a causal framework to explore modularity, which relates to the causal principle of Independent Mechanisms, stating that the causal mechanisms contributing to the overall generating process do not influence nor inform each other (Peters et al., 2017) .",
"1 We study the effect of direct interventions in the network from the point of view that the mechanisms involved in generating data can be modified individually without affecting each other.",
"This principle can be applied to generative models to assess how well they capture a causal mechanism (Besserve et al., 2018) .",
"Causality allows to assay how an outcome would have changed, had some variables taken different values, referred to as a counterfactual (Pearl, 2009; Imbens & Rubin, 2015) .",
"We use counterfactuals to assess the role of specific internal variables in the overall functioning of trained deep generative models, along with a rigorous definition of disentanglement in a causal framework.",
"Then, we analyze this disentanglement in implemented models based on unsupervised counterfactual manipulations.",
"We show empirically how VAEs and GANs trained on image databases exhibit modularity of their hidden units, encoding different features and allowing counterfactual editing of generated images.",
"Related work.",
"Our work relates to the interpretability of convolutional neural networks, which has been intensively investigated in discriminative architectures (Zeiler & Fergus, 2014; Dosovitskiy & Brox, 2016; Fong & Vedaldi, 2017; Zhang et al., 2017b; a) .",
"Generative models require a different approach, as the downstream effect of changes in intermediate representations are high dimensional.",
"InfoGANs.",
"β-VAEs and other works (Chen et al., 2016; Mathieu et al., 2016; Kulkarni et al., 2015; Higgins et al., 2017) address supervised or unsupervised disentanglement of latent variables related to what we formalize as extrinsic disentanglement of transformations acting on data points.",
"We introduce the novel concept of intrinsic disentanglement to uncover the internal organization of networks, arguing that many interesting transformations are statistically dependent and are thus unlikely to be disentangled in the latent space.",
"This relates to Bau et al. (2018) who proposed a framework based on interventions on internal variables of a GAN which, in contrast to our fully unsupervised approach, requires semantic information.",
"Higgins et al. (2018) suggest a definition of disentanglement based on group representation theory.",
"Compared to this proposal, our approach (introduced independently in (Anonymous, 2018) ) is more flexible as it applies to arbitrary continuous transformations, free from the strong requirements of representation theory (see Appendix F).",
"Finally, an interventional approach to disentanglement has also be taken by Suter et al. (2018) , who focuses on extrinsic disentanglement in a classical graphical model setting and develop measures of interventional robustness based on labeled data.",
"We introduced a mathematical definition of disentanglement, related it to the causal notion of counterfactual and used it for the unsupervised characterization of the representation encoded by different groups of channels in deep generative architectures.",
"We found evidence for interpretable modules of internal variables in four different generative models trained on two complex real world datasets.",
"Our framework opens a way to a better understanding of complex generative architectures and applications such as the style transfer (Gatys et al., 2015) of controllable properties of generated images at low computational cost (no further optimization is required), and the automated assessment of robustness of object recognition systems to contextual changes.",
"From a broader perspective, this research direction contributes to a better exploitation of deep neural networks obtained by costly and highly energy-consuming training procedures, by (1) enhancing their interpretability and (2) allowing them to be used for tasks their where not trained for.",
"This offers a perspective on how more sustainable research in Artificial Intelligence could be fostered in the future."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.1538461446762085,
0.2142857164144516,
0.21621620655059814,
0.12244897335767746,
0.1599999964237213,
0.043478257954120636,
0.1395348757505417,
0.17391303181648254,
0.25,
0.1269841194152832,
0.11428570747375488,
0.1666666567325592,
0.1090909019112587,
0.09090908616781235,
0.20512819290161133,
0.13636362552642822,
0.3181818127632141,
0.19354838132858276,
0.1395348757505417,
0.07692307233810425,
0.2222222238779068,
0.07692307233810425,
0.21276594698429108,
0.21739129722118378,
0.0624999962747097,
0.07999999821186066,
0.19230768084526062,
0.2978723347187042,
0.25641024112701416,
0.158730149269104,
0.1090909019112587,
0.11428570747375488
] | SJxDDpEKvH | true | [
"We develop a framework to find modular internal representations in generative models and manipulate then to generate counterfactual examples."
] |
[
"Catastrophic forgetting poses a grand challenge for continual learning systems, which prevents neural networks from protecting old knowledge while learning new tasks sequentially.",
"We propose a Differentiable Hebbian Plasticity (DHP) Softmax layer which adds a fast learning plastic component to the slow weights of the softmax output layer.",
"The DHP Softmax behaves as a compressed episodic memory that reactivates existing memory traces, while creating new ones.",
"We demonstrate the flexibility of our model by combining it with existing well-known consolidation methods to prevent catastrophic forgetting.",
"We evaluate our approach on the Permuted MNIST and Split MNIST benchmarks, and introduce Imbalanced Permuted MNIST — a dataset that combines the challenges of class imbalance and concept drift.",
"Our model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.",
"A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence.",
"Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well, through extensive training on large datasets with millions of training examples or more.",
"Most of the ML models that we use during deployment assume that the real-world is stationary, where in fact it is non-stationary and the distribution of acquired data changes over time.",
"Therefore, after learning is complete, and these models are fine-tuned with new data, performance degrades with respect to the original data.",
"This phenomenon *Work done during an internship at Uber AI.",
"† Work done while at Google Brain.",
"known as catastrophic forgetting or catastrophic interference BID17 BID7 serves to be a crucial problem for deep neural networks (DNNs) that are tasked with continual learning BID26 or lifelong learning (Thrun & Mitchell, 1995) .",
"In this learning paradigm, the goal is to adapt and learn consecutive tasks without forgetting how to perform previously learned tasks.",
"Some of the real-world applications that typically require this kind of learning include perception for autonomous vehicles, recommender systems, fraud detection, etc.In most supervised learning methods, DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution.",
"However, for ML systems that require continual learning in the real-world, the iid assumption is easily violated when: (1) There is concept drift or class imbalance in the training data distribution.(2",
") Data representing all scenarios in which the learner is expected to perform are not initially available. In",
"such situations, DNNs face the \"stability-plasticity dilemma\" BID6 BID0 . This",
"presents a continual learning challenge for models that need to balance plasticity (integrate new knowledge) and stability (preserve existing knowledge).Two major",
"theories have been proposed to explain a human's ability to perform continual learning. The first",
"theory is inspired by synaptic consolidation in the mammalian neocortex BID5 where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale. The second",
"theory is the complementary learning systems (CLS) theory BID16 BID23 BID12 , which suggests that humans extract high-level structural information and store it in a different brain area while retaining episodic memories.Here, we extend the work on differentiable plasticity BID18 BID19 to a continual learning setting and develop a model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses. We modify",
"the traditional softmax layer and propose to augment the slow weights with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP). The model",
"'s slow weights learn deep representations of data and the fast weights implemented with DHP learn to quickly \"auto-associate\" the class labels to representations. We also",
"demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation based methods to overcoming catastrophic forgetting such as elastic weight consolidation BID11 BID28 , synaptic intelligence (Zenke et al., 2017) and memory aware synapses . Our model",
"unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories to remember previous knowledge and mitigate catastrophic forgetting.",
"We have shown that the problem of catastrophic forgetting in continual learning environments can be alleviated by adding compressed episodic memory in the softmax layer through DHP.",
"DHP Softmax alone showed noticeable improvement across all benchmarks when compared to a neural network with a traditional softmax layer.",
"We demonstrated the flexibility of our model where, in addition to DHP Softmax, we can regularize the slow weights using EWC, SI or MAS to improve a model's ability to alleviate catastrophic forgetting.",
"The approach where we combine DHP Softmax and MAS consistently leads to overall superior results compared to other baseline methods on several benchmarks.",
"This gives a strong indication that Hebbian plasticity enables neural networks to learn continually and remember distant memories, thus reducing catastrophic forgetting when learning from sequential datasets in dynamic environments.",
"For the Imbalanced Permuted MNIST experiments shown in Figure 2 , the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC BID28 , λ = 1.0 for SI (Zenke et al., 2017) and λ = 0.1 for MAS .",
"In SI, the damping parameter, ξ, was set to 0.1.",
"Similar to the Permuted MNIST benchmark, to find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed.",
"Across all experiments, we maintained the the same random probabilities detemined by a single seed to artificially remove training samples from each class.",
"The hyperparameters of the synaptic consolidation methods (i.e. Online EWC, SI and MAS) remain the same with and without DHP Softmax, and the plastic components are not regularized."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.2666666507720947,
0.25,
0.1428571343421936,
0.04255318641662598,
0.0555555522441864,
0.1702127605676651,
0.1090909019112587,
0.04081632196903229,
0.09302324801683426,
0,
0,
0.29629629850387573,
0.1428571343421936,
0.06557376682758331,
0.11764705181121826,
0.09756097197532654,
0,
0.1818181723356247,
0.2702702581882477,
0.11764705181121826,
0.16470587253570557,
0.21739129722118378,
0.09090908616781235,
0.1666666567325592,
0.2222222238779068,
0.375,
0.1428571343421936,
0.3396226465702057,
0.04444443807005882,
0.30188679695129395,
0.03448275476694107,
0.05882352590560913,
0.07692307233810425,
0.08888888359069824,
0.0416666604578495
] | r1x-E5Ss34 | true | [
"Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks; improving their ability to alleviate catastrophic forgetting in continual learning."
] |
[
"While real brain networks exhibit functional modularity, we investigate whether functional mod- ularity also exists in Deep Neural Networks (DNN) trained through back-propagation.",
"Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively well- studied neuron attribution methods.",
"By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. samples with similar feature patterns.\n",
"We argue that such groups of neurons which we call Functional Modules can serve as the basic functional unit in DNN.",
"We propose a preliminary method to identify Functional Modules via bi- clustering attribution scores of hidden neurons.\n",
"We find that first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small sub- set of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group (bicluster) show surprisingly coherent feature patterns.",
"We also show that these Functional Modules perform a critical role in discriminating data samples through ablation experiment.",
"While real brain networks exhibit functional modularity, we investigate whether functional modularity also exists in Deep Neural Networks (DNN) trained through back-propagation.",
"Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively wellstudied neuron attribution methods.",
"By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. samples with similar feature patterns.",
"We argue that such groups of neurons which we call Functional Modules can serve as the basic functional unit in DNN.",
"We propose a preliminary method to identify Functional Modules via biclustering attribution scores of hidden neurons.",
"We find that first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small subset of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group (bicluster) show surprisingly coherent feature patterns.",
"We also show that these Functional Modules perform a critical role in discriminating data samples through ablation experiment.",
"Also, these modules learn rich representations and are able to detect certain feature patterns demonstrated in a visual classification example.",
"We develop an approach to parcellate a hidden layer into functionally related groups which we call Functional Modules , by applying spectral coclustering on the attribution scores of hidden neurons.",
"We find the Functional Modules identifies functionally-related neurons in a layer and play an important role in discriminating data samples.",
"One major limitation of this short paper is that we have not tested on more general cases, such as different layers, different activation function, different models trained on more diverse datasets, etc.",
"In order to gain generalizable insights, such a massive investigation is neccessary."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.04255318641662598,
0.37931033968925476,
0.29629629850387573,
0.260869562625885,
0.3720930218696594,
0.17142856121063232,
0.1395348757505417,
0.04347825422883034,
0.38596490025520325,
0.30188679695129395,
0.260869562625885,
0.39024388790130615,
0.1764705777168274,
0.1395348757505417,
0.13333332538604736,
0.8148148059844971,
0.3181818127632141,
0.07547169178724289,
0.10810810327529907
] | HJghQ7YU8H | true | [
"We develop an approach to parcellate a hidden layer in DNN into functionally related groups, by applying spectral coclustering on the attribution scores of hidden neurons."
] |
[
"Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently available for wide use in mobile devices.",
"These chips are mainly used in computer vision applications.",
"However, the recent work of Super Characters method for text classification and sentiment analysis tasks using two-dimensional CNN models has also achieved state-of-the-art results through the method of transfer learning from vision to text.",
"In this paper, we implemented the text classification and sentiment analysis applications on mobile devices using CNN-DSA chips.",
"Compact network representations using one-bit and three-bits precision for coefficients and five-bits for activations are used in the CNN-DSA chip with power consumption less than 300mW.",
"For edge devices under memory and compute constraints, the network is further compressed by approximating the external Fully Connected (FC) layers within the CNN-DSA chip.",
"At the workshop, we have two system demonstrations for NLP tasks.",
"The first demo classifies the input English Wikipedia sentence into one of the 14 classes.",
"The second demo classifies the Chinese online-shopping review into positive or negative.",
"Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently available for wide use.",
"Sun et al. BID5 ;a) designed a two-dimensional CNN-DSA accelerator which achieved a power consumption of less than 300mW and an ultra power-efficiency of 9.3TOPS/Watt.",
"All the processing is in internal memory instead of external DRAM.",
"Demos on mobile and embedded systems show its applications in real-world implemen-Preliminary work.",
"Under review by the International Conference on Machine Learning (ICML).",
"Do not distribute.",
"Figure 1 .",
"Efficient On-Device Natural Language Processing system demonstration.",
"The CNN-DSA chip is connected to Raspberry Pi through the USB interface.",
"Keyboard sends the typing text input to Raspberry Pi through USB.",
"A monitor is connected to Raspberry Pi through HDMI for display.",
"On the monitor, it shows the introduction for the demo (zoom in to see details).",
"There are two demos.",
"The first demo classifies the input English Wikipedia sentence into one of the 14 ontologies.",
"The second demo classifies the Chinese online-shopping review into positive or negative.tations.",
"The 28nm CNN-DSA accelerator attains a 140fps for 224x224 RGB image inputs at an accuracy comparable to that of the VGG BID2 .For",
"Natural Language Processing tasks, RNN and LSTM models BID8 BID1 are widely used, which are different network architectures from the twodimensional CNN. However",
", the recent work of Super Characters method BID4 BID7 using twodimensional word embedding achieved state-of-the-art result in text classification and sentiment analysis tasks, showcasing the promise of this new approach. The Super",
"Characters method is a two-step method. In the first",
"step, the characters of the input text are drawn onto a blank image, so that an image of the text is generated with each of its characters embedded by the pixel values in the two-dimensional space. The resulting",
"image is called the Super Characters image. In the second",
"step, the generated Super Characters image is fed into a twodimensional CNN models for classification. The two- dimensional",
"CNN models are trained for the text classification task through the method of Transfer Learning, which finetunes the pretrained models on large image dataset, e.g. ImageNet BID0 , with the labeled Super Characters images for the text classsification task.In this paper, we implemented NLP applications on mobile devices using the Super Characters method on a CNN-DSA chip as shown in Figure 1 . It takes arbitrary text",
"input from keyboard connecting to a mobile device (e.g. Raspberry Pi). And then the text is pre-processed",
"into a Super Characters image and sent to the CNN-DSA chip to classify. After post-processing at the mobile",
"device, the final result will be displayed on the monitor.",
"We implemented efficient on-device NLP applications on a 300mw CNN-DSA chip by employing the twodimensional embedding used in the Super Characters method.",
"The two-dimensional embedding converts text into images, which is ready to be fed into CNN-DSA chip for two-dimensional CNN computation.",
"The demonstration system minimizes the power consumption of deep neural networks for text classification, with less than 0.2% accuracy drop from the original VGG model.",
"The potential use cases for this demo system could be the intension recognition in a local-processing smart speaker or Chatbot."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11428570747375488,
0.0714285671710968,
0.2857142686843872,
0.37837836146354675,
0.1860465109348297,
0.0952380895614624,
0.06666666269302368,
0.060606054961681366,
0.06451612710952759,
0.1249999925494194,
0.1860465109348297,
0,
0.1875,
0.06896550953388214,
0,
0,
0,
0.06451612710952759,
0.06666666269302368,
0.06666666269302368,
0.0624999962747097,
0,
0.060606054961681366,
0.0624999962747097,
0.1428571343421936,
0.09756097197532654,
0.20408162474632263,
0.07407406717538834,
0.08163265138864517,
0,
0.21621620655059814,
0.2222222238779068,
0.10810810327529907,
0.17142856121063232,
0.0714285671710968,
0.25,
0.21621620655059814,
0.09090908616781235,
0.10256409645080566
] | r1xfHqSon4 | true | [
"Deploy text classification and sentiment analysis applications for English and Chinese on a 300mW CNN accelerator chip for on-device application scenarios."
] |
[
"Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima.",
"To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture.",
"Our Any Parameter Encoder (APE) extends the encoder neural network common in amortized inference to take both a data feature vector and a model parameter vector as input.",
"APE thus reduces posterior inference across unseen data and models to a single forward pass.",
"In experiments comparing candidate topic models for synthetic data and product reviews, our Any Parameter Encoder yields comparable posteriors to more expensive methods in far less time, especially when the encoder architecture is designed in model-aware fashion.",
"We consider the problem of approximate Bayesian inference for latent variable models, such as topic models (Blei et al., 2003) , embedding models (Mohamed et al., 2009) , and dynamical systems models (Shumway and Stoffer, 1991 ).",
"An important step in using such probabilistic models to extract insight from large datasets is model checking and comparison.",
"While many types of comparison are possible (Gelman et al., 2013) , we focus on a problem that we call within-model comparison.",
"Given several candidate parameter vectors θ 1 , θ 2 , .",
". ., all from the same space Θ ⊆ R D , our goal is to efficiently determine which parameter θ m is best at explaining a given dataset of N examples {x n } N n=1 . Multiple ways exist to rank candidate parameters, including performance on heldout data or human-in-the-loop inspection. A principled choice is to select the parameter that maximizes the data's marginal likelihood: N n=1 log p(x n |θ m ). For our latent variable models of interest, computing this likelihood requires marginalizing over a hidden variable h n : p(x n |θ m ) = p(x n |h n , θ m )p(h n |θ m )dh n . This integral is challenging even for a single example n and model m. One promising solution is variational inference (VI). Using VI, we can estimate an approximate posterior q(h n |x n , θ m ) over hidden variables. Approximate posteriors q can be used to compute lower bounds on marginal likelihood, and can also be helpful for human inspection of model insights and uncertainties. However, it is expensive to estimate a separate q at each example n and model m. In this paper, we develop new VI tools 1 that enable rapid-yet-effective within-model comparisons for large datasets.",
"The need for within-model comparison (and our methods) is present in many practical modeling tasks.",
"Here we discuss two possible scenarios, with some details specialized to our intended topic modeling applications (Blei, 2012) .",
"First, in human-in-the-loop scenarios, a domain expert may inspect some estimated parameter θ and then suggest an alternative parameter θ that improves interpretability.",
"In topic modeling, this may mean removing \"intruder words\" to make topics more coherent (Chang et al., 2009) .",
"Second, in automated parameter learning scenarios, many algorithms propose data-driven transformations of the current solution θ into a new candidate θ , in order to escape the local optima common in non-convex optimization objectives for latent variable models (Roberts et al., 2016) , Examples include split-merge proposal moves (Ueda and Ghahramani, 2002; Jain and Neal, 2004) or evolutionary algorithms (Sundararajan and Mengshoel, 2016) .",
"Across both these scenarios, new candidates θ arise repeatedly over time, and estimating approximate posteriors for each is essential to assess fitness yet expensive to perform for large datasets.",
"Our contribution is the Any Parameter Encoder (APE), which amortizes posterior inference across models θ m and data x n .",
"We are inspired by efforts to scale a single model to large datasets by using an encoder neural network (NN) to amortize posterior inference across data examples (Rezende et al., 2014; Kingma and Welling, 2014) .",
"Our key idea is that to additionally generalize across models, we feed model parameter vector θ m and data feature vector x n as input to the encoder.",
"APE is applicable to any model with continuous hidden variables for which amortized inference is possible via the reparameterization trick.",
"Across two datasets and many model parameters, our Any Parameter Encoder produces posterior approximations that are nearly as good as expensive VI, but over 100x faster.",
"Future opportunities include simultaneous training of parameters and encoders, and handling Bayesian nonparametric models where θ changes size during training (Hughes et al., 2015 n )) (6) For encoder methods, the parameters {µ n , log σ 2 n } are the output of a shared encoder NN.",
"For VI, these are free parameters of the optimization problem.",
"Variational Inference (VI).",
"We perform using gradient ascent to maximize the objective in Eq. (1), learning a per-example mean and variance variational parameter.",
"We run gradient updates until our moving average loss (window of 10 steps) has improved by less than 0.001% of its previous value.",
"For our VI runs from random initializations, we use the Adam optimizer with an initial learning rate of .01, decaying the rate by 50% every 5000 steps.",
"For our warm-started runs, we use an initial learning rate of 0.0005.",
"In practice, we ran VI multiple times with different learning rate parameters and took the best one.",
"Table 1 only reports the time to run the best setting, not the total time which includes various restarts.",
"Standard encoder.",
"We use a standard encoder that closely matches the VAE for topic models in Srivastava and Sutton (2017) .",
"The only architectural difference is the addition of a temperature parameter on the µ n vector before applying the softmax to ensure the means lie on the simplex.",
"We found that the additional parameter sped up training by allowing the peakiness of the posterior to be directly tuned by a single parameter.",
"We use a feedforward encoder with two hidden layers, each 100 units.",
"We chose the architecture via hyperparameter sweeps.",
"The total number of trainable parameters in the model is 24,721 on the synthetic data and 316,781 on the real data; this is compared to 16,721 and 19,781 parameters for model-aware APE.",
"NUTS.",
"For the Hamiltonian Monte Carlo (HMC) with the No U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014) , we use a step size of 1 adapted during the warmup phase using Dual Averaging scheme.",
"Upon inspection, we find that the method's slightly lower posterior predictive log likelihood relative to VI is due to its wider posteriors.",
"We also find that the Pyro implementation is (understandably) quite slow and consequently warm-start the NUTS sampler using VI to encourage rapid mixing.",
"We are aware that there exist faster, more specialized implementations, but we decided to keep our tooling consistent for scientific purposes."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.21276594698429108,
0.3333333432674408,
0.1621621549129486,
0.13793103396892548,
0.22641508281230927,
0.09756097197532654,
0.1428571343421936,
0.06451612710952759,
0.12578615546226501,
0.10810810327529907,
0.04999999329447746,
0.09302324801683426,
0,
0.15789473056793213,
0.04081632196903229,
0.1428571343421936,
0.1818181723356247,
0.2916666567325592,
0.19512194395065308,
0.12765957415103912,
0.16129031777381897,
0.0624999962747097,
0,
0.1904761791229248,
0.04444443807005882,
0.08510638028383255,
0.05714285373687744,
0.10256409645080566,
0.052631575614213943,
0.29999998211860657,
0.17777776718139648,
0.1904761791229248,
0.1764705777168274,
0.13793103396892548,
0.1249999925494194,
0.11320754140615463,
0.09302324801683426,
0.13636362552642822,
0.1395348757505417
] | rylGty24YB | true | [
"We develop VAEs where the encoder takes a model parameter vector as input, so we can do rapid inference for many models"
] |
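The Any Parameter Encoder described in the row above is only sketched in prose, so a minimal illustrative example follows. This is not the authors' implementation: the PyTorch dependency, layer widths, and all names are assumptions; the only point shown is that the encoder consumes the data example x_n together with a flattened model parameter vector θ_m and returns a reparameterized Gaussian approximate posterior.

```python
import torch
import torch.nn as nn

class AnyParameterEncoder(nn.Module):
    """Amortizes q(h_n | x_n, theta_m) across BOTH data examples and models by
    taking the flattened parameter vector theta_m as an extra input."""
    def __init__(self, data_dim, param_dim, latent_dim, hidden_dim=100):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(data_dim + param_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x, theta):
        h = self.trunk(torch.cat([x, theta], dim=-1))  # condition on data AND model
        return self.mu(h), self.log_var(h)

def reparameterized_sample(mu, log_var):
    # Reparameterization trick: the sample stays differentiable w.r.t. mu, log_var.
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

if __name__ == "__main__":
    enc = AnyParameterEncoder(data_dim=50, param_dim=20, latent_dim=5)
    x = torch.randn(8, 50)                                 # a mini-batch of examples
    for theta in (torch.randn(20), torch.randn(20)):       # two candidate models
        mu, log_var = enc(x, theta.unsqueeze(0).expand(x.shape[0], -1))
        print(reparameterized_sample(mu, log_var).shape)
```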
[
"The ability to transfer knowledge to novel environments and tasks is a sensible desiderata for general learning agents.",
"Despite the apparent promises, transfer in RL is still an open and little exploited research area.",
"In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient.",
"Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture.",
"Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function.",
"Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.",
"To some, intelligence is measured as the capability of transferring knowledge to unprecedented situations.",
"While the notion of intellect itself is hard to define, the ability to reuse learned information is a desirable trait for learning agents.",
"The coffee test (Goertzel et al., 2012) , presented as a way to assess general intelligence, suggests the task of making coffee in a completely unfamiliar kitchen.",
"It requires a combination of advanced features (planning, control and exploration) that would make the task very difficult if not out of scope for the current state-of-the-art Reinforcement Learning (RL) agents to learn.",
"On the other hand, it is solved trivially by humans, who exploit the universally invariant structure of coffee-making: one needs to fetch a mug, find coffee, power the coffee machine, add water and launch the brewing process by pushing the adequate buttons.",
"Thus, to solve the coffee test, transfer learning appears necessary.",
"Were we to possess a random kitchen simulator and a lot of compute, current transfer methods would still fall short of consistently reusing structural information about the task, hence also falling short of efficient adaptation.",
"Credit assignment, which in RL refers to measuring the individual contribution of actions to future rewards, is by definition about understanding the structure of the task.",
"By structure, we mean the relations between elements of the states, actions and environment rewards.",
"In this work, we investigate what credit assignment can bring to transfer.",
"Encouraged by recent successes in transfer based on supervised methods, we propose to learn to assign credit through a separate supervised problem and transfer credit assignment capabilities to new environments.",
"By doing so, we aim at recycling structural information about the underlying task.",
"To this end, we introduce SECRET (SElf-attentional CREdit assignment for Transfer), a transferable credit assignment mechanism consisting of a self-attentive sequence-to-sequence model whose role is to reconstruct the sequence of rewards from a trajectory of agent-environment interactions.",
"It assigns credit for future reward proportionally to the magnitude of attention paid to past state-action pairs.",
"SECRET can be used to incorporate structural knowledge in the reward function without modifying optimal behavior, as we show in various generalization and transfer scenarios that preserve the structure of the task.",
"Existing backward-view credit assignment methods (Arjona-Medina et al., 2019; Hung et al., 2018) require to add auxiliary terms to the loss function used to train agents, which can have detrimental effects to the learning process (de Bruin et al., 2018) , and rely on an external memory, which hinder the generality of their approach.",
"SECRET does neither.",
"Also, as we show in Sec. 3.1, the architecture we consider for SECRET has interesting properties for credit assignment.",
"We elaborate about our novelty with respect to prior work in Sec. 4.",
"We insist on the fact that the focus of our work is on transfer and that it is not our point to compete on credit assignment capabilities.",
"We would like to emphasize several aspects about the generality of SECRET:",
"1) our method does not require any modification to the RL algorithm used to solve the tasks considered,",
"2) it does not require any modification to the agent architecture either and",
"3) it does not alter the set of optimal policies we wish to attain.",
"Moreover, our method for credit assignment is offline, and as a result, it can use interaction data collected by any mean (expert demonstrations, replay memories (Lin, 1992) , backup agent trajectories.",
". . ). We believe that this feature is of importance for real-world use cases where a high number of online interactions is unrealistic but datasets of interactions exist as a byproduct of experimentation.",
"Background We place ourselves in the classical Markov Decision Process (MDP) formalism (Puterman, 1994 ).",
"An MDP is a tuple (S, A, γ, R, P ) where S is a state space, A is an action space, γ is a discount factor (γ ∈ [0, 1)), R : S × A × S → R is a bounded reward function that maps state-action pairs to the expected reward for taking such an action in such a state.",
"Note that we choose a form that includes the resulting state in the definition of the reward function over the typical R : S × A → R. This is for consistency with objects defined later on.",
"Finally, P : S × A → ∆ S is a Markovian transition kernel that maps state-action pairs to a probability distribution over resulting states, ∆ S denoting the simplex over S.",
"An RL agent interacts with an MDP at a given timestep t by choosing an action a t ∈ A and receiving a resulting state s t+1 ∼ P (·|s t , a t ) and a reward r t = R(s t , a t , s t+1 ) from the environment.",
"A trajectory τ = (s i , a i , r i ) i=1,...,T is a set of state-action pairs and resulting rewards accumulated in an episode.",
"A subtrajectory is a portion of trajectory that starts at the beginning of the episode.",
"The performance of an agent is evaluated by its expected discounted cumulative reward E ∞ t=0 γ t r t .",
"In a partially observable MDP (POMDP), the agent receives at each timestep t an observation o t ∼ O(·|s t ) that contains partial information about the underlying state of the environment.",
"2 SECRET: SELF-ATTENTIONAL CREDIT ASSIGNMENT FOR TRANSFER SECRET uses previously collected trajectories from environments in a source distribution.",
"A selfattentive sequence model is trained to predict the final reward in subtrajectories from the sequence of observation-action pairs.",
"The distribution of attention weights from correctly predicted nonzero rewards is viewed as credit assignment.",
"In target environments, the model gets applied to a small set of trajectories.",
"We use the credit assigned to build a denser and more informative reward function that reflects the structure of the (PO)MDP.",
"The case where the target distribution is identical to the source distribution (in which we use held-out environments to assess transfer) will be referred to as generalization or in-domain transfer, as opposed to out-of-domain transfer where the source and the target distributions differ.",
"In this work, we investigated the role credit assignment could play in transfer learning and came up with SECRET, a novel transfer learning method that takes advantage of the relational properties of self-attention and transfers credit assignment instead of policy weights.",
"We showed that SECRET led to improved sample efficiency in generalization and transfer scenarios in non-trivial gridworlds and a more complex 3D navigational task.",
"To the best of our knowledge, this is the first line of work in the exciting direction of credit assignment for transfer.",
"We think it would be worth exploring how SECRET could be incorporated into online reinforcement learning methods and leave this for future work."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2666666507720947,
0.27586206793785095,
0.1860465109348297,
0.5,
0.1666666567325592,
0.277777761220932,
0.2222222238779068,
0.3030303120613098,
0.15789473056793213,
0.1818181723356247,
0.1599999964237213,
0.17391303181648254,
0.1818181723356247,
0.22857142984867096,
0.14814814925193787,
0.23999999463558197,
0.31578946113586426,
0.07692307233810425,
0.31111109256744385,
0.27586206793785095,
0.1428571343421936,
0.178571417927742,
0,
0.25806450843811035,
0,
0.4117647111415863,
0.1599999964237213,
0.20689654350280762,
0.07692307233810425,
0.14814814925193787,
0.27272728085517883,
0.20000000298023224,
0.0714285671710968,
0.1428571343421936,
0.2666666507720947,
0.1538461446762085,
0.1249999925494194,
0.1621621549129486,
0.307692289352417,
0.1249999925494194,
0.1463414579629898,
0.06451612710952759,
0.19999998807907104,
0.2857142686843872,
0.23076923191547394,
0.25,
0.13333332538604736,
0.30434781312942505,
0.11428570747375488,
0.4516128897666931,
0.05714285373687744
] | B1xybgSKwB | true | [
"Secret is a transfer method for RL based on the transfer of credit assignment."
] |
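The credit-assignment step described in the row above (spreading a predicted reward over past state-action pairs in proportion to attention, then using the result as a denser reward) can be sketched in a few lines. This is a toy numpy illustration under stated assumptions (causal, row-normalized attention and a per-step correctness mask), not the paper's implementation.

```python
import numpy as np

def redistribute_rewards(rewards, attention, pred_correct):
    """rewards: (T,) environment rewards; attention: (T, T) causal, row-normalized
    attention weights; pred_correct: (T,) bool mask of correctly predicted rewards.
    Returns a (T,) denser reward built from backward-view credit."""
    credit = np.zeros_like(rewards, dtype=float)
    for t in range(len(rewards)):
        if rewards[t] != 0 and pred_correct[t]:
            credit += rewards[t] * attention[t, :]   # spread r_t over earlier steps
    return credit

if __name__ == "__main__":
    T = 5
    rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])     # sparse terminal reward
    attn = np.tril(np.random.rand(T, T)) + 1e-8       # causal attention pattern
    attn /= attn.sum(axis=1, keepdims=True)
    dense = redistribute_rewards(rewards, attn, np.ones(T, dtype=bool))
    print(dense, dense.sum())
```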
[
"Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data.",
"In this paper, we introduce two generic Variational Inference frameworks for generative models of Knowledge Graphs; Latent Fact Model and Latent Information Model. ",
"While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions.",
"The new framework can create models able to discover underlying probabilistic semantics for the symbolic representation by utilising parameterisable distributions which permit training by back-propagation in the context of neural variational inference, resulting in a highly-scalable method.",
"Under a Bernoulli sampling framework, we provide an alternative justification for commonly used techniques in large-scale stochastic variational inference, which drastically reduces training time at a cost of an additional approximation to the variational lower bound. ",
"The generative frameworks are flexible enough to allow training under any prior distribution that permits a re-parametrisation trick, as well as under any scoring function that permits maximum likelihood estimation of the parameters.",
"Experiment results display the potential and efficiency of this framework by improving upon multiple benchmarks with Gaussian prior representations.",
"Code publicly available on Github.",
"In many fields, including physics and biology, being able to represent uncertainty is of crucial importance BID18 .",
"For instance, when link prediction in Knowledge Graphs is used for driving expensive pharmaceutical experiments (Bean et al., 2017) , it would be beneficial to know what is the confidence of a model in its predictions.",
"However, a significant shortcoming of current neural link prediction models BID13 BID38 -and for the vast majority of neural representation learning approaches -is their inability to express a notion of uncertainty.Furthermore, Knowledge Graphs can be very large and web-scale BID14 and often suffer from incompleteness and sparsity BID14 .",
"In a generative probabilistic model, we could leverage the variance in model parameters and predictions for finding which facts to sample during training, in an Active Learning setting BID22 BID17 .",
"BID16 use dropout for modelling uncertainty, however, this is only applied at test time.However, current neural link prediction models typically only return point estimates of parameters and predictions BID32 , and are trained discriminatively rather than generatively: they aim at predicting one variable of interest conditioned on all the others, rather than accurately representing the relationships between different variables BID31 , however, BID16 could still be applied to get uncertainty estimates for these models.",
"The main argument of this article is that there is a lack of methods for quantifying predictive uncertainty in a knowledge graph embedding representation, which can only be utilised using probabilistic modelling, as well as a lack of expressiveness under fixed-point representations.",
"This constitutes a significant contribution to the existing literature because we introduce a framework for creating a family of highly scalable probabilistic models for knowledge graph representation, in a field where there has been a lack of this.",
"We do this in the context of recent advances in variational inference, allowing the use of any prior distribution that permits a re-parametrisation trick, as well as any scoring function which permits maximum likelihood estimation of the parameters.",
"We have successfully created a framework allowing a model to learn embeddings of any prior distribution that permits a re-parametrisation trick via any score function that permits maximum likelihood estimation of the scoring parameters.",
"The framework reduces the parameter by one hyperparameter -as we typically would need to tune a regularisation term for an l1/ l2 loss term, however as the Gaussian distribution is self-regularising this is deemed unnecessary for matching state-ofthe-art performance.",
"We have shown, from preliminary experiments, that these display competitive results with current models.",
"Overall, we believe this work will enable knowledge graph researchers to work towards the goal of creating models better able to express their predictive uncertainty."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1
] | [
0.12121211737394333,
0.1666666567325592,
0.12244897335767746,
0.1249999925494194,
0.1249999925494194,
0.0952380895614624,
0,
0,
0.12903225421905518,
0.0833333283662796,
0.1071428507566452,
0.1395348757505417,
0.07999999821186066,
0.20408162474632263,
0.21739129722118378,
0.045454539358615875,
0.0476190447807312,
0.03999999538064003,
0.0714285671710968,
0.37837836146354675
] | HJM4rsRqFX | true | [
"Working toward generative knowledge graph models to better estimate predictive uncertainty in knowledge inference. "
] |
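The framework in the row above is described only abstractly; as one hypothetical instantiation, the sketch below combines Gaussian variational embeddings, the reparametrisation trick, and a DistMult score trained on a negative ELBO. The dimensions, the KL weight, and the choice of DistMult are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianKGEmbedding(nn.Module):
    """Gaussian variational embeddings for entities/relations plus any score
    function that permits maximum-likelihood training (DistMult used here)."""
    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.ent_mu = nn.Embedding(n_entities, dim)
        self.ent_logvar = nn.Embedding(n_entities, dim)
        self.rel_mu = nn.Embedding(n_relations, dim)
        self.rel_logvar = nn.Embedding(n_relations, dim)

    @staticmethod
    def _sample(mu, logvar):
        # Reparametrisation trick: differentiable sample from N(mu, sigma^2)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    @staticmethod
    def _kl(mu, logvar):
        # KL( N(mu, sigma^2) || N(0, I) ), summed over embedding dimensions
        return 0.5 * (torch.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(-1)

    def forward(self, s, r, o, label):
        es = self._sample(self.ent_mu(s), self.ent_logvar(s))
        er = self._sample(self.rel_mu(r), self.rel_logvar(r))
        eo = self._sample(self.ent_mu(o), self.ent_logvar(o))
        score = (es * er * eo).sum(-1)                       # DistMult scoring
        nll = F.binary_cross_entropy_with_logits(score, label)
        kl = (self._kl(self.ent_mu(s), self.ent_logvar(s)) +
              self._kl(self.rel_mu(r), self.rel_logvar(r)) +
              self._kl(self.ent_mu(o), self.ent_logvar(o))).mean()
        return nll + 1e-3 * kl          # negative ELBO; KL weight is illustrative

if __name__ == "__main__":
    model = GaussianKGEmbedding(n_entities=100, n_relations=10, dim=16)
    s = torch.randint(0, 100, (32,))
    r = torch.randint(0, 10, (32,))
    o = torch.randint(0, 100, (32,))
    label = torch.randint(0, 2, (32,)).float()
    loss = model(s, r, o, label)
    loss.backward()
    print(loss.item())
```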
[
"We present a deep generative model, named Monge-Amp\\`ere flow, which builds on continuous-time gradient flow arising from the Monge-Amp\\`ere equation in optimal transport theory.",
"The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compressible fluid to flow towards the target density distribution.",
"Training of the model amounts to solving an optimal control problem.",
"The Monge-Amp\\`ere flow has tractable likelihoods and supports efficient sampling and inference.",
"One can easily impose symmetry constraints in the generative model by designing suitable scalar potential functions.",
"We apply the approach to unsupervised density estimation of the MNIST dataset and variational calculation of the two-dimensional Ising model at the critical point.",
"This approach brings insights and techniques from Monge-Amp\\`ere equation, optimal transport, and fluid dynamics into reversible flow-based generative models.",
"Generative modeling is a central topic in modern deep learning research BID20 which finds broad applications in image processing, speech synthesis, reinforcement learning, as well as in solving inverse problems and statistical physics problems.",
"The goal of generative modeling is to capture the full joint probability distribution of high dimensional data and generate new samples according to the learned distribution.",
"There have been significant advances in generative modeling in recent years.",
"Of particular interests are the variational autoencoders (VAEs) BID33 BID47 , generative adversarial networks (GANs) BID19 , and autoregressive models BID18 BID34 BID55 a; BID44 .",
"Besides, there is another class of generative models which has so far gained less attention compared to the aforementioned models.",
"These models invoke a sequence of diffeomorphism to connect between latent variables with a simple base distribution and data which follow a complex distribution.",
"Examples of these flow-based generative models include the NICE and the RealNVP networks BID12 , and the more recently proposed Glow model BID32 .",
"These models enjoy favorable properties such as tractable likelihoods and efficient exact inference due to invertibility of the network.A key concern in the design of flow-based generative models has been the tradeoff between the expressive power of the generative map and the efficiency of training and sampling.",
"One typically needs to impose additional constraints in the network architecture BID12 BID46 BID34 BID44 , which unfortunately weakens the model compared to other generative models.",
"In addition, another challenge to generative modeling is how to impose symmetry conditions such that the model generates symmetry related configurations with equal probability.To further unleash the power of the flow-based generative model, we draw inspirations from the optimal transport theory BID56 BID57 BID45 and dynamical systems BID30 .",
"Optimal transport theory concerns the problem of connecting two probability distributions p(z) and q(x) via transportation z → x at a minimal cost.",
"In this context, the Brenier theorem BID4 states that under the quadratic distance measure, the optimal generative map is the gradient of a convex function.",
"This motivates us to parametrize the vector-valued generative map as the gradient of a scalar potential function, thereby formulating the generation process as a continuous-time gradient flow BID0 .",
"In this regard, a generative map is naturally viewed as a deterministic dynamical system which evolves over time.",
"Numerical integration of the dynamical system gives rise to the neural network representation of the generative model.",
"To this end, E (2017) proposes a dynamical system perspective to machine learning, wherein the training procedure is viewed as a control problem, and the algorithm like back-propagation is naturally derived from the optimal control principle BID36 .",
"Moreover, implemented the generative map as an ODE integration and employed efficient adjoint analysis for its training.In this paper, we devise the Monge-Ampère flow as a new generative model and apply it to two problems: density estimation of the MNIST dataset and variational calculation of the Ising model.",
"In our approach, the probability density is modeled by a compressible fluid, which evolves under the gradient flow of a learnable potential function.",
"The flow has tractable likelihoods and exhibits the same computational complexity for sampling and inference.",
"Moreover, a nice feature of the MongeAmpère flow is that one can construct symmetric generative models more easily by incorporating the symmetries into the scalar potential.",
"The simplicity and generality of this framework allow the principled design of the generative map and gives rise to lightweight yet powerful generative models.",
"Gradient flow of compressible fluids in a learnable potential provides a natural way to set up deep generative models.",
"The Monge-Ampère flow combines ideas and techniques in optimal transport, fluid dynamics, and differential dynamical systems for generative modeling.We have adopted a minimalist implementation of the Monge-Ampère flow with a scalar potential parameterized by a single hidden layer densely connected neural network.",
"There are a number of immediate improvements to further boost its performance.",
"First, one could extend the neural network architecture of the potential function in accordance with the target problem.",
"For example, a convolutional neural network for data with spatial or temporal correlations.",
"Second, one can explore better integration schemes which exactly respect the time-reversal symmetry to ensure reversible sampling and inference.",
"Lastly, by employing the backpropagation scheme of through the ODE integrator one can reduce the memory consumption and achieve guaranteed convergence in the integration.Furthermore, one can employ the Wasserstein distances BID1 instead of the KLdivergence to train the Monge-Ampère flow.",
"With an alternative objective function, one may obtain an even more practically useful generative model with tractable likelihood.",
"One may also consider using batch normalization layers during the integration of the flow BID13 BID44 .",
"However, since the batch normalization breaks the physical interpretation of the continuous gradient flow of a fluid, one still needs to investigate whether it plays either a theoretical or a practical role in the continuous-time flow.Moreover, one can use a time-dependent potential ϕ(x, t) to induce an even richer gradient flow of the probability densities.",
"BID2 has shown that the optimal transport flow (in the sense of minimizing the spatial-time integrated kinetic energy, which upper bounds the squared Wasserstein-2 distance) follows a pressureless flow in a time-dependent potential.",
"The fluid moves with a constant velocity that linearly interpolates between the initial and the final density distributions.",
"Practically, a time-dependent potential corresponds to a deep generative model without sharing parameters in the depth direction as shown in FIG1 .",
"Since handling a large number of independent layers for each integration step may be computationally inefficient, one may simply accept one additional time variable in the potential function, or parametrize ϕ(x, t) as the solution of another differential equation, or partially tie the network parameters using a hyper-network BID22 .Besides",
"applications presented here, the Monge-Ampère flow has wider applications in machine learning and physics problems since it inherits all the advantages of the other flow-based generative models BID12 BID46 BID34 BID13 BID44 . A particular",
"advantage of generative modeling using the Monge-Ampère flow is that it is relatively easy to impose symmetry into the scalar potential. It is thus worth",
"exploiting even larger symmetry groups, such as the permutation for modeling exchangeable probabilities BID35 . Larger scale practical",
"application in statistical and quantum physics is also feasible with the Monge-Ampère flow. For example, one can study",
"the physical properties of realistic molecular systems using Monge-Ampère flow for variational free energy calculation. Lastly, since the mutual information",
"between variables is greatly reduced in the latent space, one can also use the Monge-Ampère flow in conjunction with the latent space hybrid Monte Carlo for efficient sampling BID38 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.17142856121063232,
0,
0.0952380895614624,
0.07692307233810425,
0,
0.0714285671710968,
0.04999999701976776,
0.1249999925494194,
0.19999998807907104,
0.05882352590560913,
0.06896550953388214,
0,
0.06666666269302368,
0.08695651590824127,
0.05882352590560913,
0.11320754140615463,
0,
0.1249999925494194,
0.1818181723356247,
0.2222222238779068,
0.25,
0.0952380895614624,
0.11999999731779099,
0.12903225421905518,
0.1666666567325592,
0.11764705181121826,
0.06666666269302368,
0.1428571343421936,
0.2083333283662796,
0,
0,
0.08695651590824127,
0,
0.0476190447807312,
0.07407406717538834,
0.07999999821186066,
0.07692307233810425,
0.052631575614213943,
0,
0.06896550953388214,
0.037735845893621445,
0.1463414579629898,
0.1875,
0.14814814925193787,
0.0714285671710968,
0.13793103396892548,
0.11428570747375488
] | rkeUrjCcYQ | true | [
"A gradient flow based dynamical system for invertible generative modeling"
] |
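The gradient-flow picture in the row above (samples move along ∇φ while the log-density changes by -Δφ under the continuity equation) admits a short sketch. The code below performs a forward Euler integration only, with a toy potential; the training objective, step size, and network are assumptions, and this is not the reference implementation.

```python
import torch
import torch.nn as nn

# Toy scalar potential phi: R^2 -> R (a learnable MLP in the real model).
phi = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def flow_step(x, logp, dt=0.05):
    """One forward Euler step of dx/dt = grad phi(x), dlogp/dt = -laplacian phi(x)."""
    x = x.requires_grad_(True)
    grad = torch.autograd.grad(phi(x).sum(), x, create_graph=True)[0]  # velocity field
    lap = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):  # exact Hessian trace; fine in low dimension
        lap = lap + torch.autograd.grad(grad[:, i].sum(), x, retain_graph=True)[0][:, i]
    return (x + dt * grad).detach(), logp - dt * lap.detach()

if __name__ == "__main__":
    z = torch.randn(256, 2)                     # latent (base) samples
    logp = -0.5 * (z ** 2).sum(1) - 1.8378771   # log density under N(0, I) in 2D
    x = z
    for _ in range(20):                         # integrate the gradient flow
        x, logp = flow_step(x, logp)
    print(x.shape, logp.mean().item())
```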
[
"Graph Neural Networks (GNNs) are a class of deep models that operates on data with arbitrary topology and order-invariant structure represented as graphs.",
"We introduce an efficient memory layer for GNNs that can learn to jointly perform graph representation learning and graph pooling.",
"We also introduce two new networks based on our memory layer: Memory-Based Graph Neural Network (MemGNN) and Graph Memory Network (GMN) that can learn hierarchical graph representations by coarsening the graph throughout the layers of memory.",
"The experimental results demonstrate that the proposed models achieve state-of-the-art results in six out of seven graph classification and regression benchmarks.",
"We also show that the learned representations could correspond to chemical features in the molecule data.",
"Graph Neural Networks (GNNs) (Wu et al., 2019; Zhou et al., 2018; are a class of deep architectures that operate on data with arbitrary topology represented as graphs such as social networks (Kipf & Welling, 2016) , knowledge graphs (Schlichtkrull et al., 2018) , molecules (Duvenaud et al., 2015) , point clouds (Hassani & Haley, 2019) , and robots .",
"Unlike regular-structured inputs with spatial locality such as grids (e.g., images and volumetric data) and sequences (e.g., speech and text), GNN inputs are variable-size graphs consisting of permutationinvariant nodes and interactions among them.",
"GNNs such as Gated GNN (GGNN) (Li et al., 2015) , Message Passing Neural Network (MPNN) (Gilmer et al., 2017) , Graph Convolutional Network (GCN) (Kipf & Welling, 2016) , and Graph Attention Network (GAT) (Velikovi et al., 2018) learn node embeddings through an iterative process of transferring, transforming, and aggregating the node embeddings from topological neighbors.",
"Each iteration expands the receptive field by one hop and after k iterations the nodes within k hops influence the node embeddings of one another.",
"GNNs are shown to learn better representations compared to random walks (Grover & Leskovec, 2016; Perozzi et al., 2014) , matrix factorization (Belkin & Niyogi, 2002; Ou et al., 2016) , kernel methods (Shervashidze et al., 2011; Kriege et al., 2016) , and probabilistic graphical models (Dai et al., 2016) .",
"These models, however, cannot learn hierarchical representation as they do not exploit the graph compositionality.",
"Recent work such as Differentiable Pooling (DiffPool) (Ying et al., 2018) , TopKPool (Gao & Ji, 2019) , and Self-Attention Graph Pooling (SAGPool) (Lee et al., 2019) define parametric graph pooling layers that let models learn hierarchical graph representation by stacking interleaved layers of GNN and pooling layers.",
"These layers cluster nodes in the latent space such that the clusters are meaningful with respect to the task.",
"These clusters might be communities in a social network or potent functional groups within a chemical dataset.",
"Nevertheless, these models are not efficient as they require an iterative process of message passing after each pooling layer.",
"In this paper, we introduce a memory layer for joint graph representation learning and graph coarsening that consists of a multi-head array of memory keys and a convolution operator to aggregate the soft cluster assignments from different heads.",
"The queries to a memory layer are node embeddings from the previous layer and the outputs are the node embeddings of the coarsened graph.",
"The memory layer does not explicitly require connectivity information and unlike GNNs relies on the global information rather than local topology.",
"These properties make them more efficient and improve their performance.",
"We also introduce two networks based on the proposed layer: Memory-based Graph Neural Network (MemGNN) and Graph Memory Network (GMN).",
"MemGNN consists of a GNN that learns the initial node embeddings, and a stack of memory layers that learns hierarchical graph representation up to the global graph embedding.",
"GMN, on the other hand, learns the hierarchical representation purely based on memory layers and hence does not require message passing.",
"We proposed an efficient memory layer and two deep models for hierarchical graph representation learning.",
"We evaluated the proposed models on nine graph classification and regression tasks and achieved state-of-the-art results on eight of them.",
"We also experimentally showed that the learned representations can capture the well-known chemical features of the molecules.",
"Our study indicated that node attributes concatenated with corresponding topological embeddings in combination with one or more memory layers achieves notable results without using message passing.",
"We also showed that for the topological embeddings, the binary adjacency matrix is sufficient and thus no further preprocessing step is required for extracting them.",
"Finally, we showed that although connectivity information is not directly imposed on the model, the memory layer can process node embeddings and properly cluster and aggregate the learned embeddings.",
"Limitations: In section 4.2, we discussed that on the COLLAB dataset, kernel methods or deep models augmented with deterministic clustering algorithm achieve better performance compared to our models.",
"Analyzing samples in this dataset suggests that in graphs with dense communities, such as cliques, our model lacks the ability to properly detect these dense sub-graphs.",
"Moreover, the results of the DD dataset reveals that our MemGNN model outperforms the GMN model which implies that we need message passing to perform better on this dataset.",
"We speculate that this is because the DD dataset relies more on local information.",
"The most important features to train an SVM on this dataset are surface features which have local behavior.",
"This suggest that for data with strong local interactions, message passing is required to improve the performance.",
"Future Directions: We are planning to introduce a model based on the MemGNN and GMN architectures that can perform node classification by attending to the node embeddings and centroids of the clusters from different layers of hierarchy that the node belongs to.",
"We are also planning to investigate the representation learning capabilities of the proposed models in self-supervised setting.",
"A APPENDIX"
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.5641025304794312,
0.31372547149658203,
0.09999999403953552,
0.11428570747375488,
0.11940298229455948,
0.07999999821186066,
0.0923076868057251,
0.04878048226237297,
0.0714285671710968,
0.11428570747375488,
0.13793103396892548,
0.05405404791235924,
0,
0.25641024112701416,
0.23076923191547394,
0.1621621549129486,
0.19999998807907104,
0.13333332538604736,
0.21052631735801697,
0.1904761791229248,
0.307692289352417,
0.4000000059604645,
0.15789473056793213,
0.17142856121063232,
0.2222222238779068,
0.1428571343421936,
0.2666666507720947,
0.0833333283662796,
0.09090908616781235,
0.1818181723356247,
0.1764705777168274,
0.10810810327529907,
0.1621621549129486,
0.23076923191547394,
0.1111111044883728
] | r1laNeBYPB | true | [
"We introduce an efficient memory layer that can learn representation and coarsen input graphs simultaneously without relying on message passing."
] |
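The memory layer in the row above is described at a high level; a rough sketch of one plausible reading is given below, with multi-head memory keys, soft assignments aggregated by a 1x1 convolution, and a coarsened output. The exact assignment kernel, head aggregation, and shapes are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Soft-assigns N input node embeddings to n_keys memory keys (per head),
    aggregates the heads with a 1x1 convolution, and returns the embeddings
    of the coarsened graph. No adjacency / message passing is used."""
    def __init__(self, in_dim, out_dim, n_keys, n_heads=4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_heads, n_keys, in_dim))
        self.agg = nn.Conv2d(n_heads, 1, kernel_size=1)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                    # x: (N, in_dim)
        h = self.keys.shape[0]
        d = torch.cdist(x.unsqueeze(0).expand(h, -1, -1).contiguous(), self.keys)
        assign = F.softmax(-d, dim=-1)                        # (heads, N, n_keys)
        assign = self.agg(assign.unsqueeze(0)).squeeze(0).squeeze(0)  # (N, n_keys)
        assign = F.softmax(assign, dim=-1)                    # final soft clustering
        coarse = assign.t() @ x                               # (n_keys, in_dim)
        return F.relu(self.proj(coarse)), assign

if __name__ == "__main__":
    layer = MemoryLayer(in_dim=16, out_dim=32, n_keys=8)
    coarse, assignment = layer(torch.randn(50, 16))
    print(coarse.shape, assignment.shape)                     # (8, 32), (50, 8)
```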
[
"Deep neural networks require extensive computing resources, and can not be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability.",
"To address this problem, we propose a novel encoding scheme by using {-1,+1} to decompose quantized neural networks (QNNs) into multi-branch binary networks, which can be efficiently implemented by bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource saving.",
"Our method can achieve at most ~59 speedup and ~32 memory saving over its full-precision counterparts.",
"Therefore, users can easily achieve different encoding precisions arbitrarily according to their requirements and hardware resources.",
"Our mechanism is very suitable for the use of FPGA and ASIC in terms of data storage and computation, which provides a feasible idea for smart chips.",
"We validate the effectiveness of our method on both large-scale image classification (e.g., ImageNet) and object detection tasks."
] | [
0,
1,
0,
0,
0,
0
] | [
0.10526315122842789,
0.6666666865348816,
0.1249999925494194,
0.1666666567325592,
0.1428571343421936,
0.07692307233810425
] | rylfIYoucQ | false | [
"A novel encoding scheme of using {-1, +1} to decompose QNNs into multi-branch binary networks, in which we used bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource saving. "
] |
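The xnor/bitcount claim in the row above follows from a simple identity: for {-1,+1} vectors encoded as bits (+1 -> 1, -1 -> 0), the dot product equals 2 * popcount(xnor(a, b)) - n. The toy snippet below checks that identity in plain Python; it illustrates only the arithmetic, not the paper's FPGA/ASIC kernels.

```python
def pack(bits):
    """Pack a list of -1/+1 values into an integer bitmask (+1 -> bit 1)."""
    word = 0
    for i, b in enumerate(bits):
        if b == +1:
            word |= (1 << i)
    return word

def binary_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors via xnor + popcount."""
    n = len(a_bits)
    mask = (1 << n) - 1
    xnor = ~(pack(a_bits) ^ pack(b_bits)) & mask   # 1 where the two vectors agree
    return 2 * bin(xnor).count("1") - n            # agreements minus disagreements

if __name__ == "__main__":
    import random
    a = [random.choice([-1, 1]) for _ in range(64)]
    b = [random.choice([-1, 1]) for _ in range(64)]
    assert binary_dot(a, b) == sum(x * y for x, y in zip(a, b))
    print("xnor/popcount dot product matches:", binary_dot(a, b))
```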
[
"We propose a novel method for incorporating conditional information into a generative adversarial network (GAN) for structured prediction tasks.",
"This method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics from the data.",
"This method also increases the strength of the signals passed through the network where the real or generated data and the conditional data agree.",
"The proposed method is conceptually simpler than the joint convolutional neural network - conditional Markov random field (CNN-CRF) models and enforces higher-order consistency without being limited to a very specific class of high-order potentials.",
"Experimental results demonstrate that this method leads to improvement on a variety of different structured prediction tasks including image synthesis, semantic segmentation, and depth estimation.",
"Convolutional neural networks (CNNs) have demonstrated groundbreaking results on a variety of different learning tasks.",
"However, on tasks where high dimensional structure in the data needs to be preserved, per-pixel regression losses typically result in unstructured outputs since they do not take into consideration non-local dependencies in the data.",
"Structured prediction frameworks such as graphical models and joint CNN-graphical model-based architectures e.g. CNN-CRFs have been used for imposing spatial contiguity using non-local information BID13 BID2 BID24 .",
"The motivation to use CNN-CRF models stems from their ability to capture some structured information from second order statistics using the pairwise part.",
"However, statistical interactions beyond the second-order are tedious to incorporate and render the models complicated BID0 BID12 ).",
"Other approaches have used task-specific perceptual losses to solve this problem BID10 .Generative",
"models provide another way to represent the structure and spacial contiguity in large high-dimensional datasets with complex dependencies. Implicit generative",
"models specify a stochastic procedure to produce outputs from a probability distribution. Such models are appealing",
"because they do not demand parametrization of the probability distribution they are trying to model. Recently, there has been",
"great interest in CNN-based implicit generative models using autoregressive BID4 and adversarial training frameworks BID16 .Generative adversarial networks",
"(GANs) BID6 can be seen as a two player minimax game where the first player, the generator, is tasked with transforming a random input to a specific distribution such that the second player, the discriminator, can not distinguish between the true and synthesized distributions. The most distinct feature of adversarial",
"networks is the discriminator that assesses the discrepancy between the current and target distributions. The discriminator acts as a progressively",
"precise critic of an increasingly accurate generator. Despite their structured prediction capabilities",
", such a training paradigm is often unstable and can suffer from mode collapse. However, recent work on spectral normalization",
"(SN) and gradient penalty has significantly increased training stability BID7 . Conditional GANs (cGANs) BID19 incorporate conditional",
"image information in the discriminator and have been widely used for class conditioned image generation . To that effect, unlike in standard GANs, a discriminator",
"for cGANs discriminates between DISPLAYFORM0 Adversarial loss (a) Concatenated Image Conditioning x y Adversarial loss DISPLAYFORM1 Discriminator models for image conditioning. We propose fusing the features of the input and the ground",
"truth or generated image rather than concatenating.the generated distribution and the target distribution on pairs of samples y and conditional information x.For class conditioning, several unique strategies have been presented to incorporate class information in the discriminator BID23 BID22 . However, a cGAN can also be conditioned by structured data",
"such as an image. Such conditioning is much more useful for structured prediction",
"problems. Since the discriminator in image conditioned-GANs has access to",
"large portions of the image the adversarial loss can be interpreted as a learned loss that incorporates higher order statistics, essentially eliminating the need to manually design higher order loss functions. Consequently, this variation of cGANs has extensively been used",
"for image-to-image translation tasks . However, the best way of incorporating image conditional information",
"into a GAN is not always clear and methods of feeding generated and conditional images to the discriminator tend to use a naive concatenation approach. In this work we address this gap by proposing a discriminator architecture",
"specifically designed for image conditioning. Such a discriminator can contribute to the promise of generalization GANs",
"bring to structured prediction problems whereby a singular and simplistic setup can be used for capturing higher order non-local structural information from higher dimensional data without complicated modeling of energy functions.Contributions. We propose an approach to incorporating conditional information into a cGAN",
"using a fusion architecture (Fig. 1b) . In particular, we make the following key contributions:1. We propose a novel",
"discriminator architecture optimized for incorporating conditional",
"information in cGANs for structured prediction tasks. The method is designed to incorporate conditional information in feature space and thereby",
"allows the discriminator to enforce higher-order consistency in the model. At the same time, this method is conceptually simpler than alternative structured prediction",
"methods such as CNN-CRFs where higher-order potentials have to be manually incorporated in the loss function.2. We demonstrate the effectiveness of this method on a variety of structured prediction tasks",
"including semantic segmentation, depth estimation, and generating real images from semantic masks. Our empirical study demonstrates that using a fusion discriminator is more effective in preserving",
"high-order statistics and structural information in the data.2 RELATED WORK 2.1 CNN-CRF MODELS Models for structured prediction have been extensively studied in",
"computer vision. In the past these models often entailed the construction of hand-engineered features. In 2015, BID15",
"demonstrated that a fully convolutional approach to semantic segmentation could yield",
"state-ofthe-art results at that time with no need for hand-engineering features. BID1 showed that post-processing the results of a CNN with a conditional Markov random field led to",
"significant improvements. Subsequent work by many authors have refined this approach by incorporating the CRF as a layer within",
"a deep network and thereby enabling the parameters of both models to be learnt simultaneously BID11 . Many researchers have used this approach for other structured prediction problems, including image-to-image",
"translation and depth estimation BID14 .In most cases CNN-CRF models only incorporate unary and pairwise potentials. Recent work by BID0 has investigated",
"incorporating higher-order potentials into CNN-based models for semantic segmentation",
", and has found that while it is possible to learn the parameters of these potentials, they can be tedious to incorporate and render the model quite complex. There is a need for developing methods that can incorporate higher-order statistical information with out manual modeling",
"of higher order potentials.",
"Structured prediction problems can be posed as image conditioned GAN problems.",
"The discriminator plays a crucial role in incorporating non-local information in adversarial training setups for structured prediction problems.",
"Image conditioned GANs usually feed concatenated input and output pairs to the discriminator.",
"In this research, we proposed a model for the discriminator of cGANs that involves fusing features from both the input and the output image in feature space.",
"This method provides the discriminator a hierarchy of features at different scales from the conditional data, and thereby allows the discriminator to capture higher-order statistics from the data.",
"We qualitatively demonstrate and empirically validate that this simple modification can significantly improve the general adversarial framework for structured prediction tasks.",
"The results presented in this paper strongly suggest that the mechanism of feeding paired information into the discriminator in image conditioned GAN problems is of paramount importance.",
"The generator G tries to minimize the loss expressed by equation 5 while the discriminator D tries to maximize it.",
"In addition, we impose an L1 reconstruction loss: DISPLAYFORM0 leading to the objective, DISPLAYFORM1 6.2 GENERATOR ARCHITECTURE We adapt our network architectures from those explained in .",
"Let CSRk denote a Convolution-Spectral Norm -ReLU layer with k filters.",
"Let CSRDk donate a similar layer with dropout with a rate of 0.5.",
"All convolutions chosen are 4 × 4 spatial filters applied with a stride 2, and in decoders they are up-sampled by 2.",
"All networks were trained from scratch and weights were initialized from a Gaussian distribution of mean 0 and standard deviation of 0.02.",
"All images were cropped and rescaled to 256 × 256, were up sampled to 268 × 286 and then randomly cropped back to 256 × 256 to incorporate random jitter in the model."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5116279125213623,
0.23999999463558197,
0.13333332538604736,
0.1666666567325592,
0.31372547149658203,
0.1463414579629898,
0.178571417927742,
0.18518517911434174,
0.21276594698429108,
0.1395348757505417,
0.10256409645080566,
0.1304347813129425,
0.09999999403953552,
0.13636362552642822,
0.04651162400841713,
0.23529411852359772,
0.1860465109348297,
0.15789473056793213,
0.08695651590824127,
0.1428571343421936,
0.3404255211353302,
0.22641508281230927,
0.34285715222358704,
0.20512820780277252,
0.2222222238779068,
0.3050847351551056,
0.41025641560554504,
0.24561403691768646,
0.4285714328289032,
0.4307692348957062,
0.3255814015865326,
0.1875,
0.40909090638160706,
0.21276594698429108,
0.3272727131843567,
0.19607841968536377,
0.2083333283662796,
0.09756097197532654,
0.1621621549129486,
0.2800000011920929,
0.09090908616781235,
0.3272727131843567,
0.04255318641662598,
0.11428570747375488,
0.3125,
0.06666666269302368,
0.2222222238779068,
0.2790697515010834,
0.20512820780277252,
0.31372547149658203,
0.2448979616165161,
0.3404255211353302,
0.2800000011920929,
0.1395348757505417,
0.11320754140615463,
0.054054051637649536,
0.10526315122842789,
0.04347825422883034,
0.09090908616781235,
0.12244897335767746
] | SJxfxnA9K7 | true | [
"We propose a novel way to incorporate conditional image information into the discriminator of GANs using feature fusion that can be used for structured prediction tasks."
] |
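The fusion idea in the row above (combining conditional and generated/real features in feature space rather than concatenating images at the input) can be sketched as follows. The use of addition as the fusion operation, the channel counts, and the patch-wise output head are assumptions made for the example, not the paper's exact discriminator.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2))

class FusionDiscriminator(nn.Module):
    """Two encoder branches (conditional image x, real/generated image y) whose
    feature maps are fused by addition at each scale before a patch-wise head."""
    def __init__(self, ch=64):
        super().__init__()
        self.x_branch = nn.ModuleList([conv_block(3, ch), conv_block(ch, 2 * ch)])
        self.y_branch = nn.ModuleList([conv_block(3, ch), conv_block(ch, 2 * ch)])
        self.head = nn.Sequential(conv_block(2 * ch, 4 * ch),
                                  nn.Conv2d(4 * ch, 1, 3, padding=1))

    def forward(self, x_cond, y_img):
        fx, fy = x_cond, y_img
        for bx, by in zip(self.x_branch, self.y_branch):
            fx, fy = bx(fx), by(fy)
            fy = fy + fx                     # fusion: strengthens agreeing features
        return self.head(fy)                 # patch-wise real/fake logits

if __name__ == "__main__":
    D = FusionDiscriminator()
    logits = D(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
    print(logits.shape)                      # torch.Size([2, 1, 16, 16])
```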
[
"Intelligent creatures can explore their environments and learn useful skills without supervision.\n",
"In this paper, we propose ``Diversity is All You Need''(DIAYN), a method for learning useful skills without a reward function.",
"Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy.",
"On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping.",
"In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward.",
"We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks.",
"Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.",
"Deep reinforcement learning (RL) has been demonstrated to effectively learn a wide range of rewarddriven skills, including playing games (Mnih et al., 2013; Silver et al., 2016) , controlling robots (Gu et al., 2017; Schulman et al., 2015b) , and navigating complex environments (Zhu et al., 2017; Mirowski et al., 2016) .",
"However, intelligent creatures can explore their environments and learn useful skills even without supervision, so that when they are later faced with specific goals, they can use those skills to satisfy the new goals quickly and efficiently.Learning skills without reward has several practical applications.",
"Environments with sparse rewards effectively have no reward until the agent randomly reaches a goal state.",
"Learning useful skills without supervision may help address challenges in exploration in these environments.",
"For long horizon tasks, skills discovered without reward can serve as primitives for hierarchical RL, effectively shortening the episode length.",
"In many practical settings, interacting with the environment is essentially free, but evaluating the reward requires human feedback BID7 .",
"Unsupervised learning of skills may reduce the amount of supervision necessary to learn a task.",
"While we can take the human out of the loop by designing a reward function, it is challenging to design a reward function that elicits the desired behaviors from the agent (Hadfield-Menell et al., 2017) .",
"Finally, when given an unfamiliar environment, it is challenging to determine what tasks an agent should be able to learn.",
"Unsupervised skill discovery partially answers this question.",
"1 Autonomous acquisition of useful skills without any reward signal is an exceedingly challenging problem.",
"A skill is a latent-conditioned policy that alters the state of the environment in a consistent way.",
"We consider the setting where the reward function is unknown, so we want to learn a set of skills by maximizing the utility of this set.",
"Making progress on this problem requires specifying a learning objective that ensures that each skill individually is distinct and that the skills collectively explore large parts of the state space.",
"In this paper, we show how a simple objective based on mutual information can enable RL agents to autonomously discover such skills.",
"These skills are useful for a number of applications, including hierarchical reinforcement learning and imitation learning.We propose a method for learning diverse skills with deep RL in the absence of any rewards.",
"We hypothesize that in order to acquire skills that are useful, we must train the skills so that they maximize coverage over the set of possible behaviors.",
"While one skill might perform a useless behavior like random dithering, other skills should perform behaviors that are distinguishable from random dithering, and therefore more useful.",
"A key idea in our work is to use discriminability between skills as an objective.",
"Further, skills that are distinguishable are not necessarily maximally diverse -a slight difference in states makes two skills distinguishable, but not necessarily diverse in a semantically meaningful way.",
"To combat this problem, we want to learn skills that not only are distinguishable, but also are as diverse as possible.",
"By learning distinguishable skills that are as random as possible, we can \"push\" the skills away from each other, making each skill robust to perturbations and effectively exploring the environment.",
"By maximizing this objective, we can learn skills that run forward, do backflips, skip backwards, and perform face flops (see Figure 3 ).",
"Our paper makes five contributions.",
"First, we propose a method for learning useful skills without any rewards.",
"We formalize our discriminability goal as maximizing an information theoretic objective with a maximum entropy policy.",
"Second, we show that this simple exploration objective results in the unsupervised emergence of diverse skills, such as running and jumping, on several simulated robotic tasks.",
"In a number of RL benchmark environments, our method is able to solve the benchmark task despite never receiving the true task reward.",
"In these environments, some of the learned skills correspond to solving the task, and each skill that solves the task does so in a distinct manner.",
"Third, we propose a simple method for using learned skills for hierarchical RL and find this methods solves challenging tasks.",
"Four, we demonstrate how skills discovered can be quickly adapted to solve a new task.",
"Finally, we show how skills discovered can be used for imitation learning.",
"In this paper, we present DIAYN, a method for learning skills without reward functions.",
"We show that DIAYN learns diverse skills for complex tasks, often solving benchmark tasks with one of the learned skills without actually receiving any task reward.",
"We further proposed methods for using the learned skills (1) to quickly adapt to a new task, (2) to solve complex tasks via hierarchical RL, and (3) to imitate an expert.",
"As a rule of thumb, DIAYN may make learning a task easier by replacing the task's complex action space with a set of useful skills.",
"DIAYN could be combined with methods for augmenting the observation space and reward function.",
"Using the common language of information theory, a joint objective can likely be derived.",
"DIAYN may also more efficiently learn from human preferences by having humans select among learned skills.",
"Finally, the skills produced by DIAYN might be used by game designers to allow players to control complex robots and by artists to animate characters."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.277777761220932,
0.380952388048172,
0.1538461446762085,
0.1249999925494194,
0.1702127605676651,
0.5957446694374084,
0.25,
0.13114753365516663,
0.22580644488334656,
0.10256409645080566,
0.2222222238779068,
0.23255813121795654,
0.04878048226237297,
0.21621620655059814,
0.18518517911434174,
0.19512194395065308,
0,
0.2631579041481018,
0.052631575614213943,
0.2222222238779068,
0.1599999964237213,
0.2666666507720947,
0.3199999928474426,
0.1304347813129425,
0.17391303181648254,
0.15789473056793213,
0.08888888359069824,
0.0952380895614624,
0.20408162474632263,
0.1304347813129425,
0,
0.4000000059604645,
0.1538461446762085,
0.12244897335767746,
0.1860465109348297,
0.21276594698429108,
0.2857142686843872,
0.3684210479259491,
0.4571428596973419,
0.3243243098258972,
0.2916666567325592,
0.3529411852359772,
0.17777776718139648,
0.21621620655059814,
0.1621621549129486,
0.05128204822540283,
0.22727271914482117
] | SJx63jRqFm | true | [
"We propose an algorithm for learning useful skills without a reward function, and show how these skills can be used to solve downstream tasks."
] |
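The skill-discovery objective in the row above reduces, in practice, to a pseudo-reward of the form log q(z|s) - log p(z) plus a skill-discriminator update. The sketch below shows only that piece; the base RL algorithm (the paper uses a maximum entropy policy), network sizes, and hyperparameters are omitted or assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_skills, state_dim = 10, 4
discriminator = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_skills))
opt = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
log_p_z = torch.log(torch.tensor(1.0 / n_skills))        # uniform prior over skills

@torch.no_grad()
def pseudo_reward(states, skills):
    # r = log q(z|s) - log p(z): large when the skill is identifiable from the state
    log_q = F.log_softmax(discriminator(states), dim=-1)
    return log_q[torch.arange(len(skills)), skills] - log_p_z

def discriminator_update(states, skills):
    loss = F.cross_entropy(discriminator(states), skills)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    s = torch.randn(32, state_dim)                        # states visited by the policy
    z = torch.randint(0, n_skills, (32,))                 # skill sampled per episode
    print(pseudo_reward(s, z).mean().item(), discriminator_update(s, z))
```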
[
"Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems.",
"Even though the use of recurrent neural networks and attention mechanisms are expected to solve this problem, machine translation systems are not always able to correctly translate lexically ambiguous sentences.",
"In this work, I attempt to resolve the problem of lexical ambiguity in English--Japanese neural machine translation systems by combining a pretrained Bidirectional Encoder Representations from Transformer (BERT) language model that can produce contextualized word embeddings and a Transformer translation model, which is a state-of-the-art architecture for the machine translation task.",
"These two proposed architectures have been shown to be more effective in translating ambiguous sentences than a vanilla Transformer model and the Google Translate system.",
"Furthermore, one of the proposed models, the Transformer_BERT-WE, achieves a higher BLEU score compared to the vanilla Transformer model in terms of general translation, which is concrete proof that the use of contextualized word embeddings from BERT can not only solve the problem of lexical ambiguity, but also boost the translation quality in general.\n",
"Machine translation is one of the most important tasks in the field of natural language processing.",
"In 2014, Sutskever and his fellow researchers at Google introduced the sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) , marking the advent of neural machine translation (NMT) in a breakthrough in the field of machine translation.",
"Since then, seq2seq models have been growing rapidly, evolving from a purely recurrent neural network (RNN)-based encoder-decoder model to recurrence-free models that rely on convolution (Gehring et al., 2017) or attention mechanisms (Vaswani et al., 2017) .",
"The Transformer architecture (Vaswani et al., 2017) , which is based on attention mechanism, is currently the standard model for machine translation tasks because of its effectiveness and efficiency.",
"It also provides a foundation for the advent of state-of-the-art language models, such as Bidirectional Encoder Representations from Transformer (BERT) (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) .",
"Section 2 shows how seq2seq models transformed from a purely RNN-based encoder-decoder model to a transformer model that relies entirely on attention mechanism.",
"Although many significant improvements have been made in the NMT field, lexical ambiguity is still a problem that causes difficulty for machine translation models.",
"Liu et al. (2017) (Liu et al., 2017) show that the performance of RNNbased seq2seq model decreases as the number of senses for each word increases.",
"Section 3 demonstrates that even modern translation models, such as Google Translate, cannot translate some lexically ambiguous sentences and forms hypotheses concerning some causes of this problem.",
"Section 4 describes the BERT language model and explains why BERT vector representations can help resolve the problem of lexical ambiguity.",
"Subsequently, two context-aware machine translation architectures that integrate pretrained BERT and Transformer models are proposed in section 5.",
"For comparison purposes, a vanilla Transformer was built with the same set of hyperparameters and trained with the same settings as the proposed models.",
"Finally, the three models were evaluated based on two criteria: i.e., the capability to produce good translations in general and the ability to translate lexically ambiguous sentences.",
"The evaluation results and sample translations are shown in section 6.3.",
"2 Neural machine translation 2.1 Sequence-to-sequence model NMT is an approach to machine translation, where a large neural network model learns to predict the likelihood of a sequence of words given a source sentence in an end-to-end fashion.",
"The neural network model used for machine translation is called a seq2seq model, which is composed of an encoder and a decoder.",
"RNN and its variants such as long short-term memory (LSTM) and gated recurrent unit (GRU) have been a common choice to build a seq2seq model.",
"The encoder, which is a multilayered RNN cell, encodes the input sequence x into a fixed-sized vector v, which is essentially the last hidden state of the encoder's RNN.",
"The decoder, which is another RNN, maps this context vector to the target sequence y.",
"In other words, a seq2seq model learns to maximize the conditional probability:",
"where T and S are the lengths of the input sentence of the source language and the output sentence of the target language, respectively.",
"In this work, we demonstrate that lexical ambiguity is an inherent problem that contemporary machine translation systems cannot completely address, hypothesize two causes of the problem, and prove that this issue can be addressed by using contextualized word embeddings that dynamically change based on the context of given words.",
"In addition, the BERT language model is demonstrated to be effective at generating contextualized word representations and two machine translation architectures that integrate pretrained BERT and Transformer translation models are proposed.",
"The two architectures are shown to be able to translate semantically ambiguous sentences effectively.",
"Furthermore, the Transformer BERT−WE model outperforms the vanilla Transformer model, proving that our approach can not only resolve the problem of lexical ambiguity, but also increases the translation quality in general."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14999999105930328,
0.09302324801683426,
0.3050847351551056,
0.09999999403953552,
0.23333333432674408,
0.13793103396892548,
0.17777776718139648,
0.1249999925494194,
0.09302324801683426,
0.09090908616781235,
0.1111111044883728,
0.307692289352417,
0,
0.09756097197532654,
0.23529411852359772,
0.1818181723356247,
0.05714285373687744,
0.04878048226237297,
0.14814814925193787,
0.17391303181648254,
0.22857142984867096,
0.052631575614213943,
0.10526315122842789,
0.06666666269302368,
0.07407406717538834,
0,
0.17241379618644714,
0.09302324801683426,
0.0714285671710968,
0.1904761791229248
] | HJeIrlSFDH | true | [
"The paper solves a lexical ambiguity problem caused from homonym in neural translation by BERT."
] |
[
"This paper focuses on the synthetic generation of human mobility data in urban areas.",
"We present a novel and scalable application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data.",
"We leverage actual ride requests from ride sharing/hailing services from four major cities in the US to train our GANs model.",
"Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities on any typical day and over any typical week.",
"Previous works have succinctly characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GANs-generated synthetic data sets.",
"Such synthetic data sets can avoid privacy concerns and be extremely useful for researchers and policy makers on urban mobility and intelligent transportation.",
"Ride sharing or hailing services have disrupted urban transportation in hundreds of cities around the globe (Times, 2018; BID2 .",
"In United States, it has been estimated that between 24% to 43% of the population have used ride-sharing services in 2018 BID21 .",
"Uber alone operates in more than 600 cities around the globe BID22 .",
"Ride sharing services have turned urban transportation into a convenient utility (available any place at any time), and become an important part of the economy in large urban areas BID8 .Ride",
"request data from ride sharing services can potentially be of great value. Data",
"gathered from ride sharing services could be used to provide insights about traffic and human mobility patterns which are essential for intelligent transportation systems. Ride",
"requests in major cities with high penetration by such services exhibit spatial and temporal variability. Modeling",
"of such variability is a challenging problem for researchers. Moreover",
", there are still unresolved challenges, such as: optimal algorithms for dynamic pooling of ride requests BID1 , real-time preplacement of vehicles BID12 , and city scale traffic congestion prediction BID17 and avoidance. Access",
"to large amount of actual ride request data is essential to understanding and addressing these challenges.Data from ride sharing services have been used for real-time sensing and analytics to yield insights on human mobility patterns BID25 BID11 . Each city",
"exhibits a different pattern of urban mobility -there could be cultural or economical factors governing these patterns. If ride sharing",
"services constitute a significant percentage of the riders in a city, can we build models from ride request data to model urban mobility for the whole city and provide societal benefit without compromising personal privacy? This question motivates",
"us to explore the potential of using Generative Adversarial Networks (GANs) to generate synthetic ride request data sets that exhibit very similar attributes as the actual ride request data sets.This work proposes a novel approach of generating synthetic ride request data sets using GANs. This approach involves",
"viewing ride requests as a (temporal) sequence of (spatial) images of ride request locations. The approach uses GANs",
"to match the properties of the synthetic data sets with that of real ride request data sets. Many recent works using",
"neural networks have looked at demand prediction BID29 BID30 and traffic prediction at intersections BID28 . In our work, we are looking",
"at generating actual ride requests for both spatially and temporally granular intervals. Also, we compare and validate",
"the spatial and temporal variations of the DISPLAYFORM0 Figure 1: Ride requests for a small region of downtown San Francisco for a typical week day. Each figure shows the aggregated",
"ride-locations (red dots) over a period of an hour. Each red dot may represent one or",
"more ride-locations. Ride density varies spatially and",
"temporally.synthetic data sets with the real data sets. In dealing with large amount of data",
"for many cities and long training times for GANs, we develop effective ways to parallelize and scale our GANs training runs using large CPU clusters on AWS. We present our GANs scaling approach",
"and experimental results, and show that significant reduction in training times can be achieved.",
"The emergence of ride sharing services and the availability of extensive data sets from such services are creating unprecedented opportunities for:",
"1) doing city-scale data analytics on urban transportation for supporting Intelligent Transportation Systems (ITS);",
"2) improving the efficiency of ride sharing services;",
"3) facilitating real-time traffic congestion prediction; and",
"4) providing new public services for societal benefit.",
"Moreover, the power of neural networks for machine learning has allowed the creation of useful models which can capture human behavior and dynamic real-world scenarios.",
"The key contributions of this paper include:• We map the ride requests of ride sharing services into a time sequence of images that capture both the temporal and spatial attributes of ride request patterns for a city.•",
"Based on extensive real world ride request data, we introduce a GANs based workflow for modeling and generating synthetic and realistic ride request data sets for a city.•",
"We further show that our GANs workload can be effectively scaled using Xeon CPU clusters on AWS, in reducing training times from hours to minutes for each city.•",
"Using previous work on modelling urban mobility patterns, we validate our GANs generated data sets for ride requests for four major US cities, by comparing the spatial and temporal properties of the GANs generated data sets against that of the real data sets.There are other promising avenues for further research. Some",
"open research topics include: Figure 6 : Plots for four cities highlighting the temporal variability of ride requests visible in both real and our model (predicted) for ride request generation. The",
"pattern is representative of any typical day of week.• Using",
"the GANs generated data sets for experiments on new algorithms for dynamic ride pooling, real-time pre-placement of vehicles, and real-time traffic congestion prediction."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9032257795333862,
0.22857142984867096,
0.1666666567325592,
0.1538461446762085,
0.2978723347187042,
0.2631579041481018,
0.2222222238779068,
0.1538461446762085,
0.13793103396892548,
0.2222222238779068,
0.13333332538604736,
0.0952380895614624,
0.060606054961681366,
0.07407406717538834,
0.04255318641662598,
0.18867924809455872,
0.1621621549129486,
0.2641509473323822,
0.2800000011920929,
0.12121211737394333,
0.29411762952804565,
0,
0,
0.0952380895614624,
0.060606054961681366,
0,
0.27586206793785095,
0.13333332538604736,
0.06666666269302368,
0.1666666567325592,
0.19354838132858276,
0.1599999964237213,
0,
0,
0.14999999105930328,
0.1249999925494194,
0.19512194395065308,
0.17391303181648254,
0.24137930572032928,
0.17391303181648254,
0.07407406717538834,
0.2631579041481018
] | H1eMBn09Km | true | [
"This paper focuses on the synthetic generation of human mobility data in urban areas using GANs. "
] |
[
"While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.",
"Consequently, model size reduction has become an utmost goal in deep learning.",
"Following the classical bits-back argument, we encode the network weights using a random sample, requiring only a number of bits corresponding to the Kullback-Leibler divergence between the sampled variational distribution and the encoding distribution.",
"By imposing a constraint on the Kullback-Leibler divergence, we are able to explicitly control the compression rate, while optimizing the expected loss on the training set.",
"The employed encoding scheme can be shown to be close to the optimal information-theoretical lower bound, with respect to the employed variational family.",
"On benchmarks LeNet-5/MNIST and VGG-16/CIFAR-10, our approach yields the best test performance for a fixed memory budget, and vice versa, it achieves the highest compression rates for a fixed test performance.\n",
"Traditional approaches to model compression usually rely on three main techniques: pruning, quantization and coding.",
"For example, Deep Compression BID3 proposes a pipeline employing all three of these techniques in a systematic manner.",
"From an information-theoretic perspective, the central routine is coding, while pruning and quantization can be seen as helper heuristics to reduce the entropy of the empirical weight-distribution, leading to shorter encoding lengths BID11 .",
"Also, the recently proposed Bayesian Compression BID9 falls into this scheme, despite being motivated by the so-called bits-back argument BID7 which theoretically allows for higher compression rates.1",
"While the bits-back argument certainly motivated the use of variational inference in Bayesian Compression, the downstream encoding is still akin to Deep Compression (and other approaches).",
"In particular, the variational distribution is merely used to derive a deterministic set of weights, which is subsequently encoded with Shannonstyle coding.",
"This approach, however, does not fully exploit the coding efficiency postulated by the bits-back argument.1",
"Recall that the bits-back argument states that, assuming a large dataset and a neural network equipped with a weight-prior p, the effective coding cost of the network weights is KL(q||p) = Eq[log q p ], where q is a variational posterior.",
"However, in order to realize this effective cost, one needs to encode both the network weights and the training targets, while it remains unclear whether it can also be achieved for network weights alone.In this paper, we step aside from the pruning-quantization pipeline and propose a novel coding method which approximately realizes bits-back efficiency.",
"In particular, we refrain from constructing a deterministic weight-set but rather encode a random weight-set from the full variational posterior.",
"This is fundamentally different from first drawing a weight-set and subsequently encoding it -this would be no more efficient than previous approaches.",
"Rather, the coding scheme developed here is allowed to pick a random weight-set which can be cheaply encoded.",
"By using results from BID4 , we show that such a coding scheme always exists and that the bits-back argument indeed represents a theoretical lower bound for its coding efficiency.",
"Moreover, we propose a practical scheme which produces an approximate sample from the variational distribution and which can indeed be encoded with this efficiency.",
"Since our algorithm learns a distribution over weightsets and derives a random message from it, while minimizing the resulting code length, we dub it Minimal Random Code Learning (MIRACLE).",
"In this paper we followed through the philosophy of the bits-back argument for the goal of coding model parameters.",
"Our algorithm is backed by solid recent information-theoretic insights, yet it is simple to implement.",
"We demonstrated that it outperforms the previous state-of-the-art.An important question remaining for future work is how efficient MIRACLE can be made in terms of memory accesses and consequently for energy consumption and inference time.",
"There lies clear potential in this direction, as any single weight can be recovered by its group-index and relative index within each group.",
"By smartly keeping track of these addresses, and using pseudo-random generators as algorithmic lookup-tables, we could design an inference machine which is able to directly run our compressed models, which might lead to considerable savings in memory accesses."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.0624999962747097,
0.25,
0.0952380895614624,
0.10526315122842789,
0.08888888359069824,
0.05714285373687744,
0.1621621549129486,
0.07999999821186066,
0.04255318641662598,
0.09090908616781235,
0.2926829159259796,
0.11428570747375488,
0.30188679695129395,
0.17910447716712952,
0.21621620655059814,
0.1428571343421936,
0.21052631735801697,
0.25531914830207825,
0.2790697515010834,
0.1666666567325592,
0.2222222238779068,
0,
0.11320754140615463,
0,
0.0714285671710968
] | BkMKZjUMq7 | true | [
"This paper proposes an effective coding scheme for neural networks that encodes a random set of weights from a variational distribution."
] |
[
"Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image.\n",
"The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image.\n",
"Given such a generator network, or prior, a noisy image can be denoised by finding the closest image in the range of the prior.\n",
"However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the networks parameters.\n",
"In this paper we consider the problem of denoising an image from additive Gaussian noise, assuming the image is well described by a deep neural network with ReLu activations functions, mapping a k-dimensional latent space to an n-dimensional image.\n",
"We state and analyze a simple gradient-descent-like iterative algorithm that minimizes a non-convex loss function, and provably removes a fraction of (1 - O(k/n)) of the noise energy.\n",
"We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.",
"We consider the image or signal denoising problem, where the goal is to remove noise from an unknown image or signal.",
"In more detail, our goal is to obtain an estimate of an image or signal y˚P R n from y \" y˚`η, where η is unknown noise, often modeled as a zero-mean white Gaussian random variable with covariance matrix σ 2 {nI.Image denoising relies on modeling or prior assumptions on the image y˚. For example, suppose that the image y˚lies in a k-dimensional subspace of R n denoted by Y. Then we can estimate the original image by finding the closest point in 2 -distance to the noisy observation y on the subspace Y. The corresponding estimate, denoted byŷ, obeys DISPLAYFORM0 with high probability (throughout, }¨} denotes the 2 -norm). Thus, the noise energy is reduced by a factor of k{n over the trivial estimateŷ \" y which does not use any prior knowledge of the signal.",
"The denoising rate (1) shows that the more concise the image prior or image representation (i.e., the smaller k), the more noise can be removed.",
"If on the other hand the prior (the subspace, in this example) does not include the original image y˚, then the error bound (1) increases as we would remove a significant part of the signal along with noise when projecting onto the range of the signal prior.",
"Thus a concise and accurate prior is crucial for denoising.Real world signals rarely lie in a priori known subspaces, and the last few decades of image denoising research have developed sophisticated and accurate image models or priors and algorithms.",
"Examples include models based on sparse representations in overcomplete dictionaries such as wavelets (Donoho, 1995) and curvelets (Starck et al., 2002) , and algorithms based on exploiting self-similarity within images BID4 .",
"A prominent example of the former class of algorithms is the BM3D BID4 algorithm, which achieves state-of-the-art performance for certain denoising problems.",
"However, the nuances of real world images are difficult to describe with handcrafted models.",
"Thus, starting with the paper (Elad & Aharon, 2006 ) that proposes to learn sparse representation based on training data, it has become common to learn concise representation for denoising (and other inverse problems) from a set of training images.In 2012, Burger et al. BID2 applied deep networks to the denoising problem, by training a deep network on a large set of images.",
"Since then, deep learning based denoisers (Zhang et al., 2017) have set the standard for denoising.",
"The success of deep network priors can be attributed to their ability to efficiently represent and learn realistic image priors, for example via autodecoders (Hinton & Salakhutdinov, 2006) and generative adversarial models (Goodfellow et al., 2014) .",
"Over the last few years, the quality of deep priors has significantly improved (Karras et al., 2017; Ulyanov et al., 2017) .",
"As this field matures, priors will be developed with even smaller latent code dimensionality and more accurate approximation of natural signal manifolds.",
"Consequently, the representation error from deep priors will decrease, and thereby enable even more powerful denoisers.As the influence of deep networks in inverse problems grows, it becomes increasingly important to understand their performance at a theoretical level.",
"Given that most optimization approaches for deep learning are first order gradient methods, a justification is needed for why they do not get stuck in local minima.",
"The closest theoretical work to this question is BID1 , which solves a noisy compressive sensing problem with generative priors by minimizing empirical risk.",
"Under the assumption that the network is Lipschitz, they show that if the global optimizer can be found, which is in principle NP-hard, then a signal estimate is recovered to within the noise level.",
"While the Lipschitzness assumption is quite mild, the resulting theory does not provide justification for why global optimality can be reached.The most related work that establishes theoretical reasons for why gradient methods would not get stuck in local minima, when using deep generative priors for solving inverse problems, is Hand & Voroninski (2018) .",
"In it, the authors establish global favorability for optimization of the noiseless empirical risk function.",
"Specifically, they show existence of a descent direction outside a ball around the global optimizer and a negative multiple of it in the latent space of the generative model.",
"This work does not provide a specific algorithm which provably estimates the global minimizer, nor does it provide an analysis of the robustness of the problem with respect to noise.In this paper, we propose the first algorithm for solving denoising with deep generative priors that provably finds an approximation of the underlying image.",
"Following the lead of Hand & Voroninski (2018), we assume an expansive Gaussian model for the deep generative network in order to establish this result.Contributions: The goal of this paper is to analytically quantify the denoising performance of deep-prior based denoisers.",
"Specifically, we characterize the performance of a simple and efficient algorithm for denoising based on a d-layer generative neural network G : R k Ñ R n , with k ă n, and random weights.",
"In more detail, we propose a gradient method with a tweak that attempts to minimize the least-squares loss f pxq \" 1 2 }Gpxq´y} 2 between the noisy image y and an image in the range of the prior, Gpxq. While f is non-convex, we show that the gradient method yields an estimatex obeying DISPLAYFORM1 with high probability, where the notation À absorbs a constant factor depending on the number of layers of the network, and its expansitivity, as discussed in more detail later. Our result shows that the denoising rate of a deep prior based denoiser is determined by the dimension of the latent representation.We also show in numerical experiments, that this rate-shown to be analytically achieved for random priors-is also experimentally achieved for priors learned from real imaging data. Loss surface f pxq \" }Gpxq´Gpx˚q}, x˚\" r1, 0s, of an expansive network G with ReLu activation functions with k \" 2 nodes in the input layer and n 2 \" 300 and n 3 \" 784 nodes in the hidden and output layers, respectively, with random Gaussian weights in each layer.",
"The surface has a critical point near´x˚, a global minimum at x˚, and a local maximum at 0."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1599999964237213,
0.23333333432674408,
0.23999999463558197,
0.07843136787414551,
0.28125,
0.2222222238779068,
0.1249999925494194,
0.17391303181648254,
0.158730149269104,
0.2745097875595093,
0.17910447716712952,
0.16129031777381897,
0.07017543166875839,
0.08163265138864517,
0.04651162400841713,
0.1794871687889099,
0.08695651590824127,
0.21875,
0.0833333283662796,
0.07843136787414551,
0.1230769157409668,
0.1090909019112587,
0.11320754140615463,
0.24561403691768646,
0.15584415197372437,
0.04651162400841713,
0.1538461446762085,
0.2535211145877838,
0.2153846174478531,
0.23333333432674408,
0.1733333319425583,
0.045454539358615875
] | SklcFsAcKX | true | [
"By analyzing an algorithms minimizing a non-convex loss, we show that all but a small fraction of noise can be removed from an image using a deep neural network based generative prior."
] |
[
"Deep learning yields great results across many fields,\n",
"from speech recognition, image classification, to translation.\n",
"But for each problem, getting a deep model to work well involves\n",
"research into the architecture and a long period of tuning.\n\n",
"We present a single model that yields good results on a number\n",
"of problems spanning multiple domains.",
"In particular, this single model\n",
"is trained concurrently on ImageNet, multiple translation tasks,\n",
"image captioning (COCO dataset), a speech recognition corpus,\n",
"and an English parsing task. \n\n",
"Our model architecture incorporates building blocks from multiple\n",
"domains.",
"It contains convolutional layers, an attention mechanism,\n",
"and sparsely-gated layers.\n\n",
"Each of these computational blocks is crucial for a subset of\n",
"the tasks we train on.",
"Interestingly, even if a block is not\n",
"crucial for a task, we observe that adding it never hurts performance\n",
"and in most cases improves it on all tasks.\n\n",
"We also show that tasks with less data benefit largely from joint\n",
"training with other tasks, while performance on large tasks degrades\n",
"only slightly if at all.",
"Recent successes of deep neural networks have spanned many domains, from computer vision BID16 to speech recognition BID7 and many other tasks.",
"Convolutional networks excel at tasks related to vision, while recurrent neural networks have proven successful at natural language processing tasks, e.g., at machine translation BID31 BID2 .",
"But in each case, the network was designed and tuned specifically for the problem at hand.",
"This limits the impact of deep learning, as this effort needs to be repeated for each new task.",
"It is also very different from the general nature of the human brain, which is able to learn many different tasks and benefit from transfer learning.",
"The natural question arises:Can we create a unified deep learning model to solve tasks across multiple domains?The",
"question about multi-task models has been studied in many papers in the deep learning literature. Natural",
"language processing models have been shown to benefit from a multi-task approach a long time ago BID5 , and recently multi-task machine translation models (MinhThang Luong, 2015) have even been shown to exhibit zero-shot learning when trained on multiple languages (Melvin Johnson, 2016) . Speech",
"recognition has also been shown to benefit from multi-task training BID27 , as have some vision problems, such as facial landmark detection BID36 . But all",
"these models are trained on other tasks from the same domain: translation tasks are trained with other translation tasks, vision tasks with other vision tasks, speech tasks with other speech tasks. Multi-modal",
"learning has been shown to improve learned representations in the unsupervised setting BID22 and when used as a-priori known unrelated tasks BID24 . But no competitive",
"multi-task multi-modal model has been proposed, so the above question remains unanswered.In this work, we take a step toward positively answering the above question by introducing the MultiModel architecture, a single deep-learning model that can simultaneously learn multiple tasks from various domains. Concretely, we train",
"the MultiModel simultaneously on the following 8 corpora:Code available at redacted. (1) WSJ speech corpus",
"(Consortium et al., 1994 ), used for sentence-level speech recognition.(2) ImageNet dataset BID25",
", used for image classification. (3) COCO image captioning",
"dataset BID17 , used for image captioning. (4) WSJ parsing dataset BID18",
", used for constituency parsing. These corpora were chosen as",
"they are commonly used for machine learning the respective tasks: speech-to-text, image classification, captioning, parsing and translation. The model learns all of these",
"tasks and achieves good performance: not state-of-the-art at present, but above many task-specific models studied in recent past (see the Section 3 for details). FIG0 illustrates some decodes",
"taken directly from the model: it is clear that it can caption images, categorize them, translate to French and German and construct parse trees. While the MultiModel is only",
"a first step and will be improved in the future, two key insights are crucial to making it work at all and are our main contributions.Small modality-specific sub-networks convert into a unified representation and back from it. To allow training on input data",
"of widely different sizes and dimensions, such as images, sound waves and text, we need sub-networks to convert inputs into a joint representation space. We call these sub-networks modality",
"nets as they are specific to each modality (images, speech, text) and define transformations between these external domains and a unified representation. We design modality nets to be computationally",
"minimal, promoting heavy feature extraction and ensuring that the majority of computation is performed within the domain-agnostic body of the model. Since our model is auto-regressive, modality",
"nets need to both convert the inputs into the unified representation and later convert from this representation into the output space. Two design decisions were important:• The unified",
"representation is variable-size. While a fixed-size representation is tempting and",
"easier to implement, it creates a bottleneck and limits the performance of the model. • Different tasks from the same domain share modality",
"nets. We avoid creating a sub-network for every task, and prefer",
"only to create one for every input modality. For example, all translation tasks share the same modality-net",
"(and vocabulary), no matter for which language pair. This encourages generalization across tasks and allows to add",
"new tasks on the fly.Computational blocks of different kinds are crucial for good results on various problems. The body of the MultiModel incorporates building blocks from",
"mutiple domains. We use depthwiseseparable convolutions, an attention mechanism",
", and sparsely-gated mixture-of-experts layers. These blocks were introduced in papers that belonged to different",
"domains and were not studied before on tasks from other domains. For example, separable convolutions were introduced in the Xception",
"Figure 2 : The MultiModel, with modality-nets, an encoder, and an autoregressive decoder.architecture BID4 and were not applied to text or speech processing before. On the other hand, the sparsely-gated mixture-of-experts BID29 had",
"been introduced for language processing tasks and has not been studied on image problems. We find that each of these mechanisms is indeed crucial for the domain",
"it was introduced, e.g., attention is far more important for languagerelated tasks than for image-related ones. But, interestingly, adding these computational blocks never hurts performance",
", even on tasks they were not designed for. In fact we find that both attention and mixture-of-experts layers slightly improve",
"performance of MultiModel on ImageNet, the task that needs them the least."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09999999403953552,
0.09999999403953552,
0,
0.17391303181648254,
0,
0,
0,
0.09999999403953552,
0,
0.1111111044883728,
0.09999999403953552,
0,
0.1249999925494194,
0,
0,
0,
0,
0.09090908616781235,
0,
0,
0,
0.060606054961681366,
0.05405404791235924,
0.07407406717538834,
0,
0.1764705777168274,
0.06896550953388214,
0.14814814925193787,
0.16326530277729034,
0.0555555522441864,
0.06896550953388214,
0.10810810327529907,
0.039215683937072754,
0,
0.07407406717538834,
0,
0,
0,
0.17142856121063232,
0.04999999701976776,
0.05405404791235924,
0.039215683937072754,
0.05128204822540283,
0.05405404791235924,
0.05882352590560913,
0.05882352590560913,
0.09999999403953552,
0.060606054961681366,
0.08695651590824127,
0.06896550953388214,
0.06896550953388214,
0,
0,
0.07407406717538834,
0.06666666269302368,
0.09302325546741486,
0.05405404791235924,
0,
0.060606054961681366,
0
] | HyKZyYlRZ | true | [
"Large scale multi-task architecture solves ImageNet and translation together and shows transfer learning."
] |
[
"Machine learning algorithms for controlling devices will need to learn quickly, with few trials.",
"Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories.",
"Illustrations of this approach are presented on a cyberphysical system: the slot car game, and also on Atari 2600 games.",
"There is a growing need for algorithms that control cyberphysical systems to learn with very little data how to operate quickly in a partially-known environment.",
"Many reinforcement-learning (RL) solutions using neural networks (NN) have proved to work well with emulators, for instance with the Atari 1 2600 games BID17 , or with real systems such as robots BID11 .",
"However, these state-of-the-art approaches need a lot of training data, which may not be obtainable within the allowed time frame or budget.",
"This work thus started as an alternative approach to teach computers to learn quickly to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources.We first review reinforcement learning methods for Markov Decision Processes (MDP) and Partially Observable MDP (POMDP).",
"We then explain the motivation behind our continental-philosophyinspired approach.",
"We describe the two classes of problems on which we focus: the bijective case, which may lead to playing by imitating, and the category-based approach, which should lead to a more innovative behavior of the control algorithms.",
"Both approaches rely on knowledge accumulated during previous experiences, as in Lifelong Machine Learning BID6 .These",
"two approaches are illustrated by results from both a commercial slot car game controlled by an 8-bit Arduino system, and from Atari 2600 video games running within the Arcade Learning Environment (ALE, see BID1 ).",
"Continental philosophy lead us to formalize a mathematical concept to control an agent evolving in a world, whether it is simulated or real.",
"The power of this framework was illustrated by the theoretical example of the slot car on unknown circuits.",
"Results from experiments with a real slot car, using real analog signals confirmed our expectations, even though it only used a basic survival approach.",
"Moreover, the same basic survival strategy was applied to two Atari 2600 games and showed the same trend: even though not as skilled as, for instance, DQN-based agents trained with two hundred million frames, our AI reached in less than ten thousand frames scores that DQN met after learning with a few million frames.The next steps are to apply the transposition properties to the Atari games, as we did for the slot car, which should further decrease the learning time when playing a new game.",
"Moreover, going beyond the basic survival strategy will be mandatory to reach higher scores: approaches based on Monte-Carlo Tree Search will be investigated."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.380952388048172,
0.0714285671710968,
0.07692307233810425,
0.2666666507720947,
0.10526315122842789,
0,
0.15686273574829102,
0.1249999925494194,
0.0555555522441864,
0,
0,
0.0714285671710968,
0,
0.13793103396892548,
0.07894736528396606,
0.0714285671710968
] | SyF7Erp6W | true | [
"Continental-philosophy-inspired approach to learn with few data."
] |
[
"Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales.",
"Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation.",
"In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio.",
"WaveGAN is capable of synthesizing one second slices of audio waveforms with global coherence, suitable for sound effect generation.",
"Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano.",
"We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising.",
"Synthesizing audio for specific domains has many practical applications in creative sound design for music and film.",
"Musicians and Foley artists scour large databases of sound effects to find particular audio recordings suitable for specific scenarios.",
"This strategy is painstaking and may result in a negative outcome if the ideal sound effect does not exist in the library.",
"A better approach might allow a sound artist to explore a compact latent space of audio, taking broad steps to find the types of sounds they are looking for (e.g. footsteps) and making small adjustments to latent variables to finetune (e.g. a large boot lands on a gravel path).",
"However, audio signals have high temporal resolution, and strategies that learn such a representation must perform effectively in high dimensions.Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are one such unsupervised strategy for mapping low-dimensional latent vectors to high-dimensional data.",
"The potential advantages of GAN-based approaches to audio synthesis are numerous.",
"Firstly, GANs could be useful for data augmentation (Shrivastava et al., 2017) in data-hungry speech recognition systems.",
"Secondly, GANs could enable rapid and straightforward sampling of large amounts of audio.",
"Furthermore, while the usefulness of generating static images with GANs is arguable, there are many applications (e.g. Foley) for which generating sound effects is immediately useful.",
"But despite their increasing fidelity at synthesizing images (Radford et al., 2016; BID2 Karras et al., 2018) , GANs have yet to be demonstrated capable of synthesizing audio in an unsupervised setting.A naïve solution for applying image-generating GANs to audio would be to operate them on imagelike spectrograms, i.e., time-frequency representations of audio.",
"This practice of bootstrapping image recognition algorithms for audio tasks is commonplace in the discriminative setting (Hershey et al., 2017) .",
"In the generative setting however, this approach is problematic as the most perceptually-informed spectrograms are non-invertible, and hence cannot be listened to without lossy estimations (Griffin & Lim, 1984) or learned inversion models (Shen et al., 2018) .Recent",
"work (van den Oord et al., 2016; Mehri et al., 2017) has shown that neural networks can be trained with autoregression to operate on raw audio. Such approaches",
"are attractive as they dispense with engineered feature representations. However, unlike",
"with GANs, the autoregressive setting results in slow generation as output audio samples must be fed back into the model one at a time.In this work, we investigate both waveform and spectrogram strategies for generating one-second slices of audio with GANs.1 For our spectrogram",
"approach (SpecGAN), we first design a spectrogram representation that allows for approximate inversion, and bootstrap the two-dimensional deep convolutional GAN (DCGAN) method (Radford et al., 2016) to operate on these spectrograms. In WaveGAN, our waveform",
"approach, we flatten the DCGAN architecture to operate in one dimension, resulting in a model with the same number of parameters and numerical operations as its twodimensional analog. With WaveGAN, we provide",
"both a starting point for practical audio synthesis with GANs and a recipe for modifying other image generation methods to operate on waveforms.We primarily envisage our method being applied to the generation of short sound effects suitable for use in music and film. For example, we trained",
"a WaveGAN on drums, resulting in a procedural drum machine designed to assist electronic musicians (demo chrisdonahue.com/wavegan). However, human evaluation",
"for such domain-specific tasks would require expert listeners. Therefore, we also consider",
"a speech benchmark, facilitating straightforward assessment by human annotators. Specifically, we explore a",
"task where success can easily be judged by any English speaker: generating examples of spoken digits \"zero\" through \"nine\".Though our evaluation focuses",
"on a speech generation task, we note that it is not our goal to develop a text-to-speech synthesizer. Instead, our investigation concerns",
"whether unsupervised strategies can learn global structure (e.g. words in speech data) implicit in high-dimensional audio signals without conditioning. Our experiments on speech demonstrate",
"that both WaveGAN and SpecGAN can generate spoken digits that are intelligible to humans. On criteria of sound quality and speaker",
"diversity, human judges indicate a preference for the audio generated by WaveGAN compared to that from SpecGAN.",
"Results for our evaluation appear in TAB0 .",
"We also evaluate our metrics on the real training data, the real test data, and a version of SC09 generated by a parametric speech synthesizer BID4 .",
"We also compare to SampleRNN (Mehri et al., 2017) and two public implementations of WaveNet (van den Oord et al., 2016), but neither method produced competitive results (details in Appendix B), and we excluded them from further evaluation.",
"These autoregressive models have not previously been examined on small-vocabulary speech data, and their success at generating full words has only been demonstrated when conditioning on rich linguistic features.",
"Sound examples for all experiments can be found at chrisdonahue.com/wavegan_examples.While the maximum inception score for SC09 is 10, any score higher than the test set score of 8 should be seen as evidence that a generative model has overfit.",
"Our best WaveGAN model uses phase shuffle with n = 2 and achieves an inception score of 4.7.",
"To compare the effect of phase shuffle to other common regularizers, we also tried using 50% dropout in the discriminator's activations, which resulted in a lower score.",
"Phase shuffle decreased the inception score of SpecGAN, possibly because the operation has an exaggerated effect when applied to the compact temporal axis of spectrograms.Most experiments produced |D| self (diversity) values higher than that of the test data, and all experiments produced |D| train (distance from training data) values higher than that of the test data.",
"While these measures indicate that our generative models produce examples with statistics that deviate from those of the real data, neither metric indicates that the models achieve high inception scores by the trivial solutions outlined in Section 6.2.Compared to examples from WaveGAN, examples from SpecGAN achieve higher inception score (6.0 vs. 4.7) and are labeled more accurately by humans (66% vs. 58%).",
"However, on subjective criteria of sound quality and speaker diversity, humans indicate a preference for examples from WaveGAN.",
"It appears that SpecGAN might better capture the variance in the underlying data compared to WaveGAN, but its success is compromised by sound quality issues when its spectrograms are inverted to audio.",
"It is possible that the poor qualitative ratings for examples from SpecGAN are primarily caused by the lossy Griffin-Lim inversion (Griffin & Lim, 1984) and not the generative procedure itself.",
"We see promise in both waveform and spectrogram audio generation with GANs; our study does not suggest a decisive winner.",
"For a more thorough investigation of spectrogram generation methods, we point to follow-up work BID10 .Finally",
", we train WaveGAN and SpecGAN models on the four other domains listed in Section 5. Somewhat",
"surprisingly, we find that the frequency-domain spectra produced by WaveGAN (a timedomain method) are visually more consistent with the training data (e.g. in terms of sharpness) than those produced by SpecGAN FIG0",
"We present WaveGAN, the first application of GANs to unsupervised audio generation.",
"WaveGAN is fully parallelizable and can generate hours of audio in only a few seconds.",
"In its current form, WaveGAN can be used for creative sound design in multimedia production.",
"In our future work we plan to extend WaveGAN to operate on variable-length audio and also explore a variety of label conditioning strategies.",
"By providing a template for modifying image generation models to operate on audio, we hope that this work catalyzes future investigation of GANs for audio synthesis.",
"Post-processing filters reject frequencies corresponding to noise byproducts created by the generative procedure (top).",
"The filter for speech boosts signal in prominent speech bands, while the filter for bird vocalizations (which are more uniformly-distributed in frequency) simply reduces noise presence."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.20689654350280762,
0.12121211737394333,
0.23076923191547394,
0.1538461446762085,
0.1538461446762085,
0.19354838132858276,
0.0833333283662796,
0.14814814925193787,
0,
0.04081632196903229,
0.0833333283662796,
0.21052631735801697,
0.07692307233810425,
0.19999998807907104,
0.12121211737394333,
0.11320754140615463,
0.06896550953388214,
0.043478257954120636,
0.22857142984867096,
0.10526315122842789,
0.1599999964237213,
0.09090908616781235,
0.10810810327529907,
0.1599999964237213,
0.0714285671710968,
0,
0,
0,
0.0714285671710968,
0.06451612710952759,
0.07407406717538834,
0.1599999964237213,
0,
0,
0.045454543083906174,
0,
0,
0.07407406717538834,
0.060606058686971664,
0.03999999910593033,
0.06557376682758331,
0,
0.10810810327529907,
0,
0.2142857164144516,
0.0833333283662796,
0,
0.052631575614213943,
0.29999998211860657,
0.08695651590824127,
0,
0.13333332538604736,
0.1818181723356247,
0.09090908616781235,
0
] | ByMVTsR5KQ | true | [
"Learning to synthesize raw waveform audio with GANs"
] |
[
"The difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, source domain, but operates in another, target domain.",
"Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that do not form clearly separable and aligned clusters.",
"We address this issue by propagating labels using a manifold structure, and by enforcing cycle consistency to align the clusters of features in each domain more closely.",
"Specifically, we prove that cycle consistency leads the embedded features distant from all but one clusters if the source domain is ideally clustered.",
"We additionally utilize more information from approximated local manifold and pursue local manifold consistency for more improvement.",
"Results for various domain adaptation scenarios show tighter clustering and an improvement in classification accuracy.",
"Classifiers trained through supervised learning have many applications (Bahdanau et al., 2015; Redmon et al., 2016) , but it requires a great deal of labeled data, which may be impractical or too costly to collect.",
"Domain adaptation circumvents this problem by exploiting the labeled data available in a closely related domain.",
"We call the domain where the classifier will be used at, the target domain, and assume that it only contains unlabeled data {x t }; and we call the closely related domain the source domain and assume that it contains a significant amount of labeled data {x s , y s }.",
"Domain adaptation requires the source domain data to share discriminative features with the target data (Pan et al., 2010) .",
"In spite of the common features, a classifier trained using only the source data is unlikely to give satisfactory results in the target domain because of the difference between two domains' data distributions, called domain shift (Pan et al., 2010) .",
"This may be addressed by fine-tuning on the target domain with a small set of labeled target data, but it tends to overfit to the small labeled dataset (Csurka, 2017) .",
"Another approach is to find discriminative features which are invariant between two domains by reducing the distance between the feature distributions.",
"For example, domain-adversarial neural network (DANN) (Ganin et al., 2016) achieved remarkable result using generative adversarial networks (GANs) (Goodfellow et al., 2014) .",
"However, this approach still has room to be improved.",
"Because the classifier is trained using labels from the source domain, the source features become clustered, and they determine the decision boundary.",
"It would be better if the embedded features from the target domain formed similar clusters to the source features in class-level so that the decision boundary does not cross the target features.",
"Methods which only reduce the distance between two marginal distributions bring the features into general alignment, but clusters do not match satisfactorily, as shown in Fig. 1(a) .",
"As a consequence, the decision boundary is likely to cross the target features, impairing accuracy.",
"In this work, we propose a novel domain adaptation method to align the manifolds of the source and the target features in class-level, as shown in Fig. 1(b) .",
"We first employ label propagation to evaluate the relation between manifolds.",
"Then, to align them, we reinforce the cycle consistency that is the correspondence between the original labels in the source domain and the labels that are propagated from the source to the target and back to the source domain.",
"The cycle consistency draws features from both domains that are near to each other to converge, and those that are far apart to diverge.",
"The proposed method exploits manifold information using label propagation which had not been taken into account in other cycle consistency based methods.",
"As a result, our approach outperforms other baselines on various scenarios as demonstrated in Sec. 4.",
"Moreover, the role of cycle consistency is theoretically explained in Sec. 3.2 that it leads to aligned manifolds in class-level.",
"To acquire more manifold information within the limited number of mini-batch samples, we utilize local manifold approximation and pursue local manifold consistency.",
"In summary, our contributions are as follows:",
"• We propose a novel domain adaptation method which exploits global and local manifold information to align class-level distributions of the source and the target.",
"• We analyze and demonstrate the benefit of the proposed method over the most similar baseline, Associative domain adaptation (AssocDA) (Haeusser et al., 2017) .",
"• We present the theoretical background on why the proposed cycle consistency leads to class-level manifold alignment, bringing better result in domain adaptation.",
"• We conduct extensive experiments on various scenarios and achieve the state-of-the-art performance.",
"In this paper, we proposed a novel domain adaptation which stems from the objective to correctly align manifolds which might result in better performance.",
"Our method achieved it, which was supported by intuition, theory and experiments.",
"In addition, its superior performance was demonstrated on various benchmark dataset.",
"Based on graph, our method depends on how to construct the graph.",
"Pruning the graph or defining a similarity matrix considering underlying geometry may improve the performance.",
"Our method also can be applied to semi supervised learning only with slight modification.",
"We leave them as future work.",
"A PROOF OF THEOREM 1 Theorem 1.",
"Let {e i |1 ≤ i ≤ C} be the standard bases of C-dimensional Euclidean space.",
"For the sake of simplicity, source data x 1 , x 2 , · · · , x Ns are assumed to be arranged so that the first n 1 data belong to class 1, the n 2 data to class 2, and so forth.",
"Assume that 1) the source data is ideally clustered, in the sense that T ss has positive values if the row and the column are the same class and zero otherwise,",
"i.e.",
", T ss = diag(T 1 , T 2 , · · · , T C ), the block diagonal where T i is a n i × n i positive matrix for i = 1, 2, · · · , C and 2)ŷ s = y s .",
"Then for all 1 ≤ j ≤ C, there exists a nonnegative vector v j ∈ R Ns such that 1) the part where source data belongs to j th class (from [n 1 + n 2 + · · · + n j−1 + 1] th element to [n 1 + n 2 + · · · + n j ] th element) are positive and the other elements are all zero and 2) v j T stŷ t e i = 0 for all 1 ≤ i ≤ C, i =",
"j. From the assumption, T ss is a block diagonal matrix of which block elements are T 1 ,T 2 ,· · · ,T C .",
"v j is all zero except n j elements in the middle of v j .",
"The n j elements are all positive and their indices correspond to those of T j in T ss .",
"In the proof, the left eigenvector u j of T j will be substituted to this part.",
"Proof.",
"From the Perron-Frobenius Theorem (Frobenius et al., 1912; Perron, 1907) that positive matrix has a real and positive eigenvalue with positive left and right eigenvectors, T j , the block diagonal element of T ss , has a positive left eigenvector u j with eigenvalue λ j for all j = 1, 2, · · · C. Then, as shown below, v j = ( 0 0 ··· 0 u j 0 ··· 0 ) where n 1 + n 2 + · · · + n j−1 zeros, u j and n j+1 + n j+2 + · · · + n C zeros are concatenated, is a left eigenvector of T ss with eigenvalue λ j by the definition of eigenvector.",
"From the label propagation, we have,ŷ",
"By multiplying v j (I − T ss ) on the left and e i on the right to the both sides in Equation 13 and combining with the assumptionŷ s = y s , we have,",
"The last zero comes from the definition of v j ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17391303181648254,
0.0952380895614624,
0.2222222238779068,
0.1463414579629898,
0.1818181723356247,
0.29411762952804565,
0.038461532443761826,
0.11428570747375488,
0.145454540848732,
0.2702702581882477,
0.18518517911434174,
0.13636362552642822,
0.10526315122842789,
0.04999999329447746,
0.0714285671710968,
0.21621620655059814,
0.27272728085517883,
0,
0.1818181723356247,
0.45454543828964233,
0.2666666507720947,
0.3255814015865326,
0.20512819290161133,
0.19512194395065308,
0,
0.10256409645080566,
0.052631575614213943,
0,
0.4285714328289032,
0.1904761791229248,
0.19512194395065308,
0.0624999962747097,
0.380952388048172,
0.12903225421905518,
0,
0.13333332538604736,
0,
0.12121211737394333,
0,
0.07999999821186066,
0,
0.12765957415103912,
0.09090908616781235,
0.08510638028383255,
0.1111111044883728,
0,
0,
0.1111111044883728,
0.05882352590560913,
0.048192769289016724,
0.07999999821186066,
0.07999999821186066,
0.06896550953388214
] | HJgY6R4YPH | true | [
"A novel domain adaptation method to align manifolds from source and target domains using label propagation for better accuracy."
] |
[
"We propose a new method for training neural networks online in a bandit setting.",
"Similar to prior work, we model the uncertainty only in the last layer of the network, treating the rest of the network as a feature extractor.",
"This allows us to successfully balance between exploration and exploitation due to the efficient, closed-form uncertainty estimates available for linear models.",
"To train the rest of the network, we take advantage of the posterior we have over the last layer, optimizing over all values in the last layer distribution weighted by probability.",
"We derive a closed form, differential approximation to this objective and show empirically over various models and datasets that training the rest of the network in this fashion leads to both better online and offline performance when compared to other methods.",
"Applying machine learning models to real world applications almost always involves deploying systems in dynamic, non-stationary environments.",
"This dilemma requires models to be constantly re-updated with new data in order to maintain a similar model performance across time.",
"Of course, doing this usually requires the new data to be relabeled, which can be expensive or in some cases, impossible.",
"In many situations, this new labeled data can be cheaply acquired by utilizing feedback from the user, where the feedback/reward indicates the quality of the action taken by the model for the given input.",
"Since the inputs are assumed independent, this task can be framed in the contextual bandit setting.",
"Learning in this setting requires a balance between exploring uncertain actions (where we risk performing sub optimal actions) and exploiting actions the model is confident will lead to high reward (where we risk missing out on discovering better actions).Methods",
"based on Thompson sampling (TS) or Upper Confidence Bounds (UCB) provide theoretically BID1 BID0 and empirically established ways BID12 BID3 for balancing exploration/exploitation in this setting. Unfortunately",
", both methods require estimation of model uncertainty. While this can",
"be done easily for most linear models, it is a difficult and open problem for large neural network models underlying many modern machine learning systems. An empirical study",
"by BID17 shows that having good uncertainty estimates is vital for neural networks learning in a bandit setting. Closed formed uncertainty",
"estimations (and online update formulas) are available for many linear models. Since the last layer of many",
"neural networks are usually (generalized) linear models, a straightforward way for learning neural networks in a bandit setting is to estimate the uncertainty (as a distribution over weights) on the last layer only, holding the previous layers fixed as feature functions which provide inputs to the linear model. This method (and variants thereof",
") has been proposed in bandit settings BID17 BID13 as well as other related settings (Snoek et al., 2015; BID16 BID11 and has been shown to work surprisingly well considering its simplicity. This style of methods, which we",
"refer to as Bayesian last layer or BLL methods, also has the advantage of being both relatively model-agnostic and scalable to large models. Of course, BLL methods come with",
"the tacit assumption that the feature functions defined by the rest of the network output good (linearly separable) representations of our inputs. This means that, unless the input",
"data distribution is relatively static, the rest of the network will need to be updated in regular intervals to maintain low regret.In order to maintain low regret, the retraining objective must: 1) allow new data to be incorporated",
"quickly into the learned model, and 2) prevent previously learned information",
"from being quickly forgotten. Previous papers retrain BLL methods simply",
"by sampling minibatches from the entire pool of previously seen data and maximizing log-likelihood over these minibatches, which fails to meet the first criteria above.In this paper we present a new retraining objective for BLL methods meeting both requirements. We avoid retraining the last layer with the",
"entire network (throwing out the uncertainty information we learned about the last layer) or retraining with the last layer fixed (fixing the last layer to the mean of its distribution). Instead, we utilize the uncertainty information",
"gathered about the last layer, and optimize the expected log-likelihood of both new and old data, marginalizing 1 over the entire distribution we have on the last layer. This gives a more robust model that performs relatively",
"well over all likely values of the last layer. While this objective cannot be exactly computed, we derive",
"a closed form, differentiable, approximation. We show that this approximation meets both criteria above,",
"with a likelihood term to maximize that depends only on the new data (meeting the first point), and a quadratic regularization term that is computed only with previously seen data (meeting the second point). We show empirically that this method improves regret on the",
"most difficult bandit tasks studied in BID17 . We additionally test the method on a large state-of-the-art",
"recurrent model, creating a bandit task out of a paraphrasing dataset. Finally, we test the method on convolutional models, constructing",
"a bandit task from a benchmark image classification dataset. We show that our method is fast to adapt to new data without quickly",
"forgetting previous information.",
"As previously done in BID17 , we use a two layer MLP as the underlying model, using the same configuration across all methods.",
"For Marginalize and Sample New, we perform the retraining after 1000 rounds.",
"For Sample All we update after 100 rounds just like BID17 .",
"In Table 1 we report the average cumulative regret as well as the cumulative regret relative to a policy that selects arms uniformly at random.",
"We report the results for both Thompson Sampling (TS) and for UCB policies.",
"Results are similar for either UCB and TS which shows that policies does not influence performance of the training mechanisms.On most of the tasks both Marginalize (our method) and Sample New outperforms Sample All (method used in BID17 ) in terms of cumulative regret.",
"Both Marginalize and Sample New techniques are very similar in performance for the three datasets.",
"All the three datasets used in this experiment are low dimensional, static, and relatively easy to learn, hence there is not much history to retain for Sample New technique.",
"In the next section we will present results on larger datasets and also evaluate where we will show that our method performs better than Sample New.",
"In TAB2 we show that our method Marginalize outperforms both Sample All and Sample New techniques for both multiclass and pool based tasks.",
"Sample All and Sample New have comparable cumulative regret.",
"Sample New has worse offline accuracy on Quora dataset (because it forgets old information), while it has better offline accuracy on MSR (because it is able to adapt quicker).",
"For Batch train, both multiclass and pool based tasks are same-a binary classification problem.",
"Batch train performs only slightly better than our method in terms of offline accuracy, where Batch train gets full feedback, while our method only gets partial (bandit) feedback.",
"FIG1 further shows that when the data distribution changes (switching form Quora to MSR in the pool based task) Marginalize and Sample New are able to adapt much faster than Sample All.",
"Overall Marginalize achieved a lower regret as well as higher offline accuracy for both the bandit settings.",
"In Table 3 we present results for the image classification bandit task, using average cumulative regret and offline accuracy as evaluation metrics.",
"Again, Sample New performs better than Sample All for cumulative regret but under-performs in the offline setting.",
"As expected, our method performs well for both cumulative regret and offline setting.",
"For the multiclass task, our method performs significantly lower than batch train.",
"This is not too surprising, for two reasons:",
"i) Training a CNN architecture takes many more epochs over the data to converge (∼ 20 in our case) which is not achieved in a bandit setting;",
"ii) CIFAR-10 has 10 classes, each defining an arm and in our setting; the bandit algorithms only gets feedback for one class in each round, compared to the full feedback received in batch train.",
"Effectively, the number of labels per class in cut by a factor of 10.",
"This is not as much an issue in the pool task, where we can see the results between batch train and the bandit algorithms are comparable.",
"In this paper we proposed a new method for training neural networks in a bandit setting.",
"We tackle the problem of exploration-exploitation by estimating uncertainty only in the last layer, allowing the method to scale to large state-of-the-art models.",
"We take advantage of having a posterior over the last layer weights by optimizing the rest of the network over all values of the last layer.",
"We show that method outperforms other methods across a diverse set of underlying models, especially in online tasks where the distribution shifts rapidly.",
"We leave it as future work to investigate more sophisticated methods for determining when to retrain the network, how to set the weight (β) of the regularization term in a more automatic way, and its possible connections to methods used for continual learning."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4848484694957733,
0.2926829159259796,
0.14999999105930328,
0.2790697515010834,
0.2181818187236786,
0.10810810327529907,
0.19999998807907104,
0.14999999105930328,
0.1666666567325592,
0.17142856121063232,
0.1090909019112587,
0.08510638028383255,
0,
0.21276594698429108,
0.3499999940395355,
0.277777761220932,
0.375,
0.14814814925193787,
0.12765957415103912,
0.1860465109348297,
0.1599999964237213,
0.06666666269302368,
0,
0.28125,
0.17777776718139648,
0.307692289352417,
0.21052631735801697,
0.060606054961681366,
0.1538461446762085,
0.277777761220932,
0.20512819290161133,
0.19512194395065308,
0,
0.1904761791229248,
0.0624999962747097,
0,
0.09756097197532654,
0.1249999925494194,
0.10169491171836853,
0.17142856121063232,
0.1249999925494194,
0.09090908616781235,
0.09999999403953552,
0,
0,
0,
0.0952380895614624,
0.08163265138864517,
0.277777761220932,
0.1428571343421936,
0.1666666567325592,
0.12121211737394333,
0.1249999925494194,
0.1428571343421936,
0.2222222238779068,
0.16326530277729034,
0.24242423474788666,
0.1818181723356247,
0.4571428596973419,
0.25,
0.3684210479259491,
0.23255813121795654,
0.1818181723356247
] | BklAyh05YQ | true | [
"This paper proposes a new method for neural network learning in online bandit settings by marginalizing over the last layer"
] |
[
"\tDespite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.\n\t",
"Our contributions in this paper are twofold.",
"First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation.",
"The aggregation is more robust and aligns better with the neural network than any single explanation method..\n\t",
"Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and humans decision processes.",
"Despite the great success of neural networks especially in classic visual recognition problems, explaining the networks' decisions remains an open research problem Samek et al. (2019) .",
"This is due in part to the complexity of the visual recognition problem and in part to the basic 'ill-posedness' of the explanation task.",
"This challenge is amplified by the fact that there is no agreement on what a sufficient explanation is and how to evaluate an explanation method.",
"Many different explanation strategies and methods have been proposed (Simonyan et al., 2013; Zeiler & Fergus, 2014; Bach et al., 2015; Selvaraju et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017) .",
"Focusing on visual explanations for individual decisions, most methods either use a backpropagation approach or aim to construct a simpler linear model with an intuitive explanation.",
"The plethora of explanation approaches is a signature of the high-level epistemic uncertainty of the explanation task.",
"This paper is motivated by a key insight in machine learning: Ensemble models can reduce both bias and variance compared to applying a single model.",
"A related approach was pursued for functional visualization in neuroimaging (Hansen et al., 2001 ).",
"Here we for the first time explore the potential of aggregating explanations of individual visual decisions in reducing epistemic uncertainty for neural networks.",
"We test the hypothesis that ensembles of multiple explanation methods are more robust than any single method.",
"This idea is analyzed theoretically and evaluated empirically.",
"We discuss the properties of the aggregate explanations and provide visual evidence that they combine features, hence are more complete and less biased than individual schemes.",
"Based on this insight, we propose two ways to aggregate explanation methods, AGG-Mean and AGG-Var.",
"In experiments on Imagenet, MNIST, and FashionMNIST, the aggregates identify relevant parts of the image more accurately than any single method.",
"Second, we introduce IROF (Iterative Removal Of Features) as a new approach to quantitatively evaluate explanation methods without relying on human evaluation.",
"We circumvent the problems of high correlation between neighbor pixels as well as the human bias that are present in current evaluation methods.",
"In this work we gave a simple proof that aggregating explanation methods will perform at least as good as the typical individual method.",
"In practice, we found evidence that aggregating methods outperforms any single method.",
"We found this evidence substantiated across quantitative metrics.",
"While our results show that different vanilla explanation methods perform best on different network architectures, an aggregation supersedes all of them on any given architecture.",
"Additionally we proposed a novel way of evaluation for explanation methods that circumvents the problem of high correlation between pixels and does not rely on visual inspection by humans, an inherently misleading metric."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.052631575614213943,
0.09090908616781235,
0.19354838132858276,
0.1818181723356247,
0.27272728085517883,
0.09999999403953552,
0.25,
0.21621620655059814,
0.14999999105930328,
0.14999999105930328,
0.1428571343421936,
0.10256409645080566,
0.12903225421905518,
0.17142856121063232,
0.375,
0.08695651590824127,
0.20512820780277252,
0.13333332538604736,
0.11428570747375488,
0.10810810327529907,
0.277777761220932,
0.21621620655059814,
0.14814814925193787,
0.08695651590824127,
0.21052631735801697,
0.25531914830207825
] | B1xeZJHKPB | true | [
"We show in theory and in practice that combining multiple explanation methods for DNN benefits the explanation."
] |
[
"We present SOSELETO (SOurce SELEction for Target Optimization), a new method for exploiting a source dataset to solve a classification problem on a target dataset. ",
"SOSELETO is based on the following simple intuition: some source examples are more informative than others for the target problem. ",
"To capture this intuition, source samples are each given weights; these weights are solved for jointly with the source and target classification problems via a bilevel optimization scheme. ",
"The target therefore gets to choose the source samples which are most informative for its own classification task. ",
"Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting. ",
"SOSELETO may be applied to both classic transfer learning, as well as the problem of training on datasets with noisy labels; we show state of the art results on both of these problems.",
"Deep learning has made possible many remarkable successes, leading to state of the art algorithms in computer vision, speech and audio, and natural language processing.",
"A key ingredient in this success has been the availability of large datasets.",
"While such datasets are common in certain settings, in other scenarios this is not true.",
"Examples of the latter include \"specialist\" scenarios, for instance a dataset which is entirely composed of different species of tree; and medical imaging, in which datasets on the order of hundreds to a thousand are common.A natural question is then how one may apply the techniques of deep learning within these relatively data-poor regimes.",
"A standard approach involves the concept of transfer learning: one uses knowledge gleaned from the source (data-rich regime), and transfers it over to the target (data-poor regime).",
"One of the most common versions of this approach involves a two-stage technique.",
"In the first stage, a network is trained on the source classification task; in the second stage, this network is adapted to the target classification task.",
"There are two variants for this second stage.",
"In feature extraction (e.g. ), only the parameters of the last layer (i.e. the classifier) are allowed to adapt to the target classification task; whereas in fine-tuning (e.g. BID12 ), the parameters of all of the network layers (i.e. both the features/representation and the classifier) are allowed to adapt.",
"The idea is that by pre-training the network on the source data, a useful feature representation may be learned, which may then be recycled -either partially or completely -for the target regime.",
"This two-stage approach has been quite popular, and works reasonably well on a variety of applications.Despite this success, we claim that the two-stage approach misses an essential insight: some source examples are more informative than others for the target classification problem.",
"For example, if the source is a large set of natural images and the target consists exclusively of cars, then we might expect that source images of cars, trucks, and motorcycles might be more relevant for the target task than, say, spoons.",
"However, this example is merely illustrative; in practice, the source and target datasets may have no overlapping classes at all.",
"As a result, we don't know a priori which source examples will be important.",
"Thus, we propose to learn this source filtering as part of an end-to-end training process.The resulting algorithm is SOSELETO: SOurce SELEction for Target Optimization.",
"Each training sample in the source dataset is given a weight, corresponding to how important it is.",
"The shared source/target representation is then optimized by means of a bilevel optimization.",
"In the interior level, the source minimizes its classification loss with respect to the representation parameters, for fixed values of the sample weights.",
"In the exterior level, the target minimizes its classification loss with respect to both the source sample weights and its own classification layer.",
"The sample weights implicitly control the representation through the interior level.",
"The target therefore gets to choose the source samples which are most informative for its own classification task.",
"Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting, as the target does not directly control the representation parameters.",
"Finally, note that the entire processtraining of the shared representation, target classifier, and source weights -happens simultaneously.We pause here to note that the general philosophy behind SOSELETO is related to the literature on instance reweighting for domain adaptation, see for example BID32 .",
"However, there is a crucial difference between SOSELETO and this literature, which is related to the difference between domain adaptation and more general transfer learning.",
"Domain adaptation is concerned with the situation in which there is either full overlap between the source and target label sets; or in some more recent work BID43 , partial but significant overlap.",
"Transfer learning, by contrast, refers to the more general situation in which there may be zero overlap between label sets, or possibly very minimal overlap.",
"(For example, if the source consists of natural images and the target of medical images.)",
"The instance reweighting literature is concerned with domain adaptation; the techniques are therefore relevant to the case in which source and target have the same labels.",
"SOSELETO is quite different: it makes no such assumptions, and is therefore a more general approach which can be applied to both \"pure\" transfer learning, in which there is no overlap between source and target label sets, as well as domain adaptation.",
"(Note also a further distinction with domain adaptation: the target is often -though not always -taken to be unlabelled in domain adaptation. This is not the case for our setting of transfer learning.)Above, we have illustrated how SOSELETO may be applied to the problem of transfer learning.",
"However, the same algorithm can be applied to the problem of training with noisy labels.",
"Concretely, we assume that there is a large noisy dataset, as well as a much smaller clean dataset; the latter can be constructed cheaply through careful hand-labelling, given its small size.",
"Then if we take the source to be the large noisy dataset, and the target to the small clean dataset, SOSELETO can be applied to the problem.",
"The algorithm will assign high weights to samples with correct labels and low weights to those with incorrect labels, thereby implicitly denoising the source, and allowing for an accurate classifier to be trained.The remainder of the paper is organized as follows.",
"Section 2 presents related work.",
"Section 3 presents the SOSELETO algorithm, deriving descent equations as well as convergence properties of the bilevel optimization.",
"Section 4 presents results of experiments on both transfer learning as well as training with noisy labels.",
"Section 5 concludes.",
"We have presented SOSELETO, a technique for exploiting a source dataset to learn a target classification task.",
"This exploitation takes the form of joint training through bilevel optimization, in which the source loss is weighted by sample, and is optimized with respect to the network parameters; while the target loss is optimized with respect to these weights and its own classifier.",
"We have derived an efficient algorithm for performing this bilevel optimization, through joint descent in the network parameters and the source weights, and have analyzed the algorithm's convergence properties.",
"We have empirically shown the effectiveness of the algorithm on both learning with label noise, as well as transfer learning problems.",
"An interesting direction for future research involves incorporating an additional domain alignment term into SOSELETO, in the case where the source and target dataset have overlapping labels.",
"We note that SOSELETO is architecture-agnostic, and thus may be easily deployed.",
"Furthermore, although we have focused on classification tasks, the technique is general and may be applied to other learning tasks within computer vision; this is an important direction for future research."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.22857142984867096,
0.11764705181121826,
0.19512194395065308,
0.12121211737394333,
0.13333332538604736,
0.09756097197532654,
0,
0,
0,
0.03333333134651184,
0.10256409645080566,
0.07692307233810425,
0.12121211737394333,
0,
0,
0.1428571343421936,
0.07547169178724289,
0.08695651590824127,
0.05882352590560913,
0.14814814925193787,
0.10256409645080566,
0.19999998807907104,
0.14814814925193787,
0.11764705181121826,
0.12121211737394333,
0,
0.0624999962747097,
0.0555555522441864,
0.03999999538064003,
0.05714285373687744,
0.09302324801683426,
0.052631575614213943,
0.07407406717538834,
0.10526315122842789,
0.07999999821186066,
0.07692307233810425,
0.1428571343421936,
0.04651162400841713,
0.060606054961681366,
0.04081632196903229,
0,
0,
0.13333332538604736,
0,
0.20689654350280762,
0.1702127605676651,
0.05128204822540283,
0.0624999962747097,
0.04999999701976776,
0,
0
] | Hye-LiR5Y7 | true | [
"Learning with limited training data by exploiting \"helpful\" instances from a rich data source. "
] |
[
"We present a novel approach for training neural abstract architectures which in- corporates (partial) supervision over the machine’s interpretable components.",
"To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differential neural computational machine (∂NCM) and show that several existing architectures (e.g., NTMs, NRAMs) can be instantiated as a ∂NCM and can thus benefit from any amount of additional supervision over their interpretable components.",
"Based on our method, we performed a detailed experimental evaluation with both, the NTM and NRAM architectures, and showed that the approach leads to significantly better convergence and generalization capabilities of the learning phase than when training using only input-output examples.\n",
"Recently, there has been substantial interest in neural abstract machines that can induce programs from examples BID2 ; BID4 ; ; BID7 ; BID11 ; BID14 ; BID18 ; BID20 ; BID23 ; BID24 .",
"While significant progress has been made towards learning interesting algorithms BID8 , ensuring the training of these machines converges to the desired solution can be very challenging.",
"Interestingly however, even though these machines differ architecturally, they tend to rely on components (e.g., neural memory) that are more interpretable than a typical neural network (e.g., an LSTM).",
"A key question then is:Can we somehow provide additional amounts of supervision for these interpretable components during training so to bias the learning towards the desired solution?In",
"this work we investigate this question in depth. We",
"refer to the type of supervision mentioned above as partial trace supervision, capturing the intuition that more detailed information, beyond inputoutput examples, is provided during learning. To",
"study the question systematically, we introduce the notion of a differential neural computational machine (∂NCM), a formalism which allows for clean characterization of the neural abstract machines that fall inside our class and that can benefit from any amount of partial trace information. We",
"show that common architectures such as Neural Turing Machines (NTMs) and Neural Random Access Machines (NRAMs) can be phrased as ∂NCMs, useful also because these architectures form the basis for many recent extensions, e.g., BID8 ; BID9 ; BID11 . We",
"also explain why other machines such as the Neural Program Interpreter (NPI) BID18 or its recent extensions (e.g., the Neural Program Lattice BID15 ) cannot be instantiated as an ∂NCM and are thus restricted to require large (and potentially prohibitive) amounts of supervision. We",
"believe the ∂NCM abstraction is a useful step in better understanding how different neural abstract machines compare when it comes to additional supervision. We",
"then present ∂NCM loss functions which abstractly capture the concept of partial trace information and show how to instantiate these for both the NTM and the NRAM. We",
"also performed an extensive evaluation for how partial trace information affects training in both architectures. Overall",
", our experimental results indicate that the additional supervision can substantially improve convergence while leading to better generalization and interpretability.To provide an intuition for the problem we study in this work, consider the simple task of training an NTM to flip the third bit in a bit stream (called Flip3rd) -such bitstream tasks have been extensively studied in the area of program synthesis (e.g., BID10 ; BID17 ). An example",
"input-output pair for this task could be examples, our goal is to train an NTM that solves this task. An example",
"NTM that generalizes well and is understandable is shown in FIG0 . Here, the",
"y-axis is time (descending), the x-axis is the accessed memory location, the white squares represent the write head of the NTM, and the orange squares represent the read head. As we can",
"see, the model writes the input sequence to the tape and then reads from the tape in the same order. However,",
"in the absence of richer supervision, the NTM (and other neural architectures) can easily overfit to the training set -an example of an overfitting NTM is shown in FIG0 . Here, the",
"traces are chaotic and difficult to interpret. Further,",
"even if the NTM generalizes, it can do so with erratic reads and writes, an example of which is shown in FIG0 . Here, the",
"NTM learns to read from the third bit (circled) with a smaller weight than from other locations, and also reads and writes erratically near the end of the sequence. This model",
"is less interpretable than the one in FIG0 because it is unclear how the model knows which the third bit actually is, or why a different read weight would help flip that bit.In this work we will develop principled ways for guiding the training of a neural abstract machine towards the behavior shown in FIG0 . For instance",
", for Flip3rd, providing partial trace information on the NTM's read heads for 10% of the input-output examples is sufficient to bias the learning towards the NTM shown in FIG0 100% of the time.",
"We presented a method for incorporating (any amount of) additional supervision into the training of neural abstract machines.",
"The basic idea was to provide this supervision (called partial trace information) over the interpretable components of the machine and to thus more effectively guide the learning towards the desired solution.",
"We introduced the ∂NCM architecture in order to precisely capture the neural abstract machines to which our method applies.",
"We showed how to formulate partial trace information as abstract loss functions, how to instantiate common neural architectures such as NTMs and NRAMs as ∂NCMs and concretize the ∂NCM loss functions.",
"Our experimental results indicate that partial trace information is effective in biasing the learning of both NTM's and NRAM's towards better converge, generalization and interpretability of the resulting models."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.3243243098258972,
0.25,
0.1818181723356247,
0.04651162400841713,
0.1860465109348297,
0.08695651590824127,
0.23255813121795654,
0.07999999821186066,
0.23255813121795654,
0.25925925374031067,
0.1111111044883728,
0.16949151456356049,
0.2926829159259796,
0.2380952388048172,
0.1818181723356247,
0.1265822798013687,
0.0555555522441864,
0.06896550953388214,
0.10256409645080566,
0.12121211737394333,
0.2380952388048172,
0.07999999821186066,
0.09999999403953552,
0.13636362552642822,
0.1515151411294937,
0.17777776718139648,
0.4000000059604645,
0.27272728085517883,
0.23529411852359772,
0.2857142686843872,
0.1395348757505417
] | S1q_Cz-Cb | true | [
"We increase the amount of trace supervision possible to utilize when training fully differentiable neural machine architectures."
] |
[
"Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important.",
"In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters.",
"We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradients estimation.",
"We evaluate BQNs on MNIST, Fashion-MNIST and KMNIST classification datasets compared against bootstrap ensemble of QNNs (E-QNN).",
"We demonstrate BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than 20% of the negative log-likelihood).",
"A Bayesian approach to deep learning considers the network's parameters to be random variables and seeks to infer their posterior distribution given the training data.",
"Models trained this way, called Bayesian neural networks (BNNs) (Wang & Yeung, 2016) , in principle have well-calibrated uncertainties when they make predictions, which is important in scenarios such as active learning and reinforcement learning (Gal, 2016) .",
"Furthermore, the posterior distribution over the model parameters provides valuable information for evaluation and compression of neural networks.",
"There are three main challenges in using BNNs: (1) Intractable posterior: Computing and storing the exact posterior distribution over the network weights is intractable due to the complexity and high-dimensionality of deep networks.",
"(2) Prediction: Performing a forward pass (a.k.a. as probabilistic propagation) in a BNN to compute a prediction for an input cannot be performed exactly, since the distribution of hidden activations at each layer is intractable to compute.",
"(3) Learning:",
"The classic evidence lower bound (ELBO) learning objective for training BNNs is not amenable to backpropagation as the ELBO is not an explicit function of the output of probabilistic propagation.",
"These challenges are typically addressed either by making simplifying assumptions about the distributions of the parameters and activations, or by using sampling-based approaches, which are expensive and unreliable (likely to overestimate the uncertainties in predictions).",
"Our goal is to propose a sampling-free method which uses probabilistic propagation to deterministically learn BNNs.",
"A seemingly unrelated area of deep learning research is that of quantized neural networks (QNNs), which offer advantages of computational and memory efficiency compared to continuous-valued models.",
"QNNs, like BNNs, face challenges in training, though for different reasons: (4.1) The non-differentiable activation function is not amenable to backpropagation.",
"(4.2)",
"Gradient updates cease to be meaningful, since the model parameters in QNNs are coarsely quantized.",
"In this work, we combine the ideas of BNNs and QNNs in a novel way that addresses the aforementioned challenges (1)(2)(3)(4) in training both models.",
"We propose Bayesian quantized networks (BQNs), models that (like QNNs) have quantized parameters and activations over which they learn (like BNNs) categorical posterior distributions.",
"BQNs have several appealing properties:",
"• BQNs solve challenge (1) due to their use of categorical distributions for their model parameters.",
"• BQNs can be trained via sampling-free backpropagation and stochastic gradient ascent of a differentiable lower bound to ELBO, which addresses challenges (2), (3) and (4) above.",
"• BQNs leverage efficient tensor operations for probabilistic propagation, further addressing challenge (2).",
"We show the equivalence between probabilistic propagation in BQNs and tensor contractions (Kolda & Bader, 2009) , and introduce a rank-1 CP tensor decomposition (mean-field approximation) that speeds up the forward pass in BQNs.",
"• BQNs provide a tunable trade-off between computational resource and model complexity: using a refined quantization allows for more complex distribution at the cost of more computation.",
"• Sampling from a learned BQN provides an alternative way to obtain deterministic QNNs .",
"In our experiments, we demonstrate the expressive power of BQNs.",
"We show that BQNs trained using our sampling-free method have much better-calibrated uncertainty compared with the stateof-the-art Bootstrap ensemble of quantized neural networks (E-QNN) trained by Courbariaux et al. (2016) .",
"More impressively, our trained BQNs achieve comparable log-likelihood against Gaussian Bayesian neural network (BNN) trained with stochastic gradient variational Bayes (SGVB) (Shridhar et al., 2019) (the performance of Gaussian BNNs are expected to be better than BQNs since they allows for continuous random variables).",
"We further verify that BQNs can be easily used to compress (Bayesian) neural networks and obtain determinstic QNNs.",
"Finally, we evaluate the effect of mean-field approximation in BQN, by comparing with its Monte-Carlo realizations, where no approximation is used.",
"We show that our sampling-free probabilistic propagation achieves similar accuracy and log-likelihood -justifying the use of mean-field approximation in BQNs.",
"We present a sampling-free, backpropagation-compatible, variational-inference-based approach for learning Bayesian quantized neural networks (BQNs).",
"We develop a suite of algorithms for efficient inference in BQNs such that our approach scales to large problems.",
"We evaluate our BQNs by Monte-Carlo sampling, which proves that our approach is able to learn a proper posterior distribution on QNNs.",
"Furthermore, we show that our approach can also be used to learn (ensemble) QNNs by taking maximum a posterior (or sampling from) the posterior distribution.",
"assuming g n (φ) can be (approximately) computed by sampling-free probabilistic propagation as in Section 2.",
"However, this approach has two major limitations:",
"(a) the Bayes' rule needed to be derived case by case, and analytic rule for most common cases are not known yet.",
"(b) it is not compatible to modern optimization methods (such as SGD or ADAM) as the optimization is solved analytically for each data point, therefore difficult to cope with large-scale models.",
"(2) Sampling-based Variational inference (SVI), formulates an optimization problem and solves it approximately via stochastic gradient descent (SGD).",
"The most popular method among all is, Stochastic Gradient Variational Bayes (SGVB), which approximates L n (φ) by the average of multiple samples (Graves, 2011; Blundell et al., 2015; Shridhar et al., 2019) .",
"Before each step of learning or prediction, a number of independent samples of the model parameters {θ s } S s=1 are drawn according to the current estimate of Q, i.e. θ s ∼ Q, by which the predictive function g n (φ) and the loss L n (φ) can be approximated by",
"where f n (θ) = Pr[y n |x n , θ] denotes the predictive function given a specific realization θ of the model parameters.",
"The gradients of L n (φ) can now be approximated as",
"This approach has multiple drawbacks:",
"(a) Repeated sampling suffers from high variance, besides being computationally expensive in both learning and prediction phases;",
"(b) While g n (φ) is differentiable w.r.t. φ, f n (θ) may not be differentiable w.r.t. θ.",
"One such example is quantized neural networks, whose backpropagation is approximated by straight through estimator (Bengio et al., 2013",
"Our approach considers a wider scope of problem settings, where the model could be stochastic, i.e.",
"] is an arbitrary function.",
"Furthermore, Wu et al. (2018) considers the case that all parameters θ are Gaussian distributed, whose sampling-free probabilistic propagation requires complicated approximation (Shekhovtsov & Flach, 2018) .",
"Quantized Neural Networks These models can be categorized into two classes: (1) Partially quantized networks, where only weights are discretized (Han et al., 2015; Zhu et al., 2016) ; (2) Fully quantized networks, where both weights and hidden units are quantized (Courbariaux et al., 2015; Kim & Smaragdis, 2016; Zhou et al., 2016; Rastegari et al., 2016; Hubara et al., 2017) .",
"While both classes provide compact size, low-precision neural network models, fully quantized networks further enjoy fast computation provided by specialized bit-wise operations.",
"In general, quantized neural networks are difficult to train due to their non-differentiability.",
"Gradient descent by backpropagation is approximated by either straight-through estimators (Bengio et al., 2013) or probabilistic methods (Esser et al., 2015; Shayer et al., 2017; Peters & Welling, 2018) .",
"Unlike these papers, we focus on Bayesian learning of fully quantized networks in this paper.",
"Optimization of quantized neural networks typically requires dedicated loss function, learning scheduling and initialization.",
"For example, Peters & Welling (2018) considers pre-training of a continuous-valued neural network as the initialization.",
"Since our approach considers learning from scratch (with an uniform initialization), the performance could be inferior to prior works in terms of absolute accuracy.",
"Tensor Networks and Tensorial Neural Networks Tensor networks (TNs) are widely used in numerical analysis (Grasedyck et al., 2013) , quantum physiscs (Orús, 2014), and recently machine learning (Cichocki et al., 2016; 2017) to model interactions among multi-dimensional random objects.",
"Various tensorial neural networks (TNNs) (Su et al., 2018; Newman et al., 2018) have been proposed that reduce the size of neural networks by replacing the linear layers with TNs.",
"Recently, (Robeva & Seigal, 2017) points out the duality between probabilistic graphical models (PGMs) and TNs.",
"I.e. there exists a bijection between PGMs and TNs.",
"Our paper advances this line of thinking by connecting hierarchical Bayesian models (e.g. Bayesian neural networks) and hierarchical TNs."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.7222222089767456,
0.2745097875595093,
0.0624999962747097,
0.05714285373687744,
0.2702702581882477,
0.08163265138864517,
0.3125,
0.13333332538604736,
0.12244897335767746,
0.04878048226237297,
0.08888888359069824,
0.2666666507720947,
0.09999999403953552,
0.05405404791235924,
0.13333332538604736,
0.10526315122842789,
0.4864864945411682,
0,
0.19999998807907104,
0.09756097197532654,
0.0714285671710968,
0.09090908616781235,
0.14999999105930328,
0.06896550953388214,
0.07999999821186066,
0.09090908616781235,
0.07017543166875839,
0.060606054961681366,
0.05714285373687744,
0.05714285373687744,
0.3448275923728943,
0.1764705777168274,
0.3333333432674408,
0.25641024112701416,
0,
0,
0.0555555522441864,
0.0476190447807312,
0,
0.04255318641662598,
0.10344827175140381,
0.1111111044883728,
0,
0,
0,
0,
0.11764705181121826,
0.0624999962747097,
0,
0.04878048226237297,
0.06896551698446274,
0.05405404791235924,
0.14814814925193787,
0,
0.19999998807907104,
0.06896550953388214,
0.06451612710952759,
0,
0,
0,
0,
0.07999999821186066,
0.060606054961681366
] | rylVHR4FPB | true | [
"We propose Bayesian quantized networks, for which we learn a posterior distribution over their quantized parameters."
] |
[
"Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks.",
"To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy.",
"Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training.",
"We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained.",
"We also propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation.",
"During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial).",
"Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks.",
"We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.",
"Injecting adversarial examples during training (adversarial training), BID1 BID3 increases the robustness of a network against adversarial attacks.",
"The networks trained with one-step methods have shown noticeable robustness against onestep attacks, but, limited robustness against iterative attacks at test time.",
"To address this challenge, we have made the following contributions:Cascade adversarial training: We first show that iteratively generated adversarial images transfer well between networks when the source and the target networks are trained with the same training method.",
"Inspired by this observation, we propose cascade adversarial training which transfers the knowledge of the end results of adversarial training.",
"In particular, we train a network by injecting iter FGSM images (section 2.1) crafted from an already defended network (a network trained with adversarial training) in addition to the one-step adversarial images crafted from the network being trained.",
"The concept of using already trained networks for adversarial training is also introduced in BID9 .",
"In their work, purely trained networks are used as another source networks to generate one-step adversarial examples for training.",
"On the contrary, our cascade adversarial training uses already defended network for iter FGSM images generation.Low level similarity learning: We advance the previous data augmentation approach by adding additional regularization in deep features to encourage a network to be insensitive to adversarial perturbation.",
"In particular, we inject adversarial images in the mini batch without replacing their corresponding clean images and penalize distance between embeddings from the clean and the adversarial examples.",
"There are past examples of using embedding space for learning similarity of high level features like face similarity between two different images BID8 BID7 Wen et al., 2016) .",
"Instead, we use the embedding space for learning similarity of the pixel level differences between two similar images.",
"The intuition of using this regularization is that small difference on input should not drastically change the high level feature representation.",
"We performed through transfer analysis and showed iter FGSM images transfer easily between networks trained with the same strategy.",
"We exploited this and proposed cascade adversarial training, a method to train a network with iter FGSM adversarial images crafted from already defended networks.",
"We also proposed adversarial training regularized with a unified embedding for classification and low-level similarity learning by penalizing distance between the clean and their corresponding adversarial embeddings.",
"Combining those two techniques (low level similarity learning + cascade adversarial training) with deeper networks further improved robustness against iterative attacks for both white-box and black-box attacks.However, there is still a gap between accuracy for the clean images and that for the adversarial images.",
"Improving robustness against both one-step and iterative attacks still remains challenging since it is shown to be difficult to train networks robust for both one-step and iterative attacks simultaneously.",
"Future research is necessary to further improve the robustness against iterative attack without sacrificing the accuracy for step attacks or clean images under both white-box attack and black-box attack scenarios.",
"We perform 24x24 random crop and random flip on 32x32 original images.",
"We generate adversarial images with \"step ll\" after these steps otherwise noted.We use stochastic gradient descent (SGD) optimizer with momentum of 0.9, weight decay of 0.0001 and mini batch size of 128.",
"For adversarial training, we generate k = 64 adversarial examples among 128 images in one mini-batch.",
"We start with a learning rate of 0.1, divide it by 10 at 4k and 6k iterations, and terminate training at 8k iterations for MNIST, and 48k and 72k iterations, and terminate training at 94k iterations for CIFAR10.",
"Ensemble models Pre-trained models R20 E , R20 P,E , R110 E , R110 P,E R20 3 , R110 3 R110 E2 , R110 P,E2 R20 4 , R110 4 Cascade models Pre-trained model R20 K,C , R20 P,C R20 P R110 K,C , R110 P,C R110 P R110 K,C2 , R110 P,C2R110 P Figure 7: Argument to the softmax vs. in test time.",
"\"step ll\", \"step FGSM\" and \"random sign\" methods were used to generate test-time adversarial images.",
"Arguments to the softmax were measured by changing for each test method and averaged over randomly chosen 128 images from CIFAR10 test-set.",
"Blue line represents true class and the red line represents mean of the false classes.",
"Shaded region shows ± 1 standard deviation of each line.We draw average value of the argument to the softmax layer for the true class and the false classes to visualize how the adversarial training works as in figure 7 .",
"Standard training, as expected, shows dramatic drop in the values for the true class as we increase in \"step ll\" or \"step FGSM direction. With adversarial training, we observe that the value drop is limited at small and our method even increases the value in certain range upto =10. Note that adversarial training is not the same as the gradient masking.As illustrated in figure 7, it exposes gradient information, however, quickly distort gradients along the sign of the gradient (\"step ll\" or \"step FGSM) direction.",
"We also observe improved results (broader margins than baseline) for \"random sign\" added images even though we didn't inject random sign added images during training.",
"Overall shape of the argument to the softmax layer in our case becomes smoother than Kurakin's method, suggesting our method is good for pixel level regularization.",
"Even though actual value of the embeddings for the true class in our case is smaller than that in Kurakin's, the standard deviation of our case is less than Kurakin's, making better margin between the true class and false classes.",
"We observe accuracies for the \"step FGSM\" adversarial images become higher than those for the clean images (\"label leaking\" phenomenon) by training with \"step FGSM\" examples as in .",
"Interestingly, we also observe \"label leaking\" phenomenon even without providing true labels for adversarial images generation.",
"We argue that \"label leaking\" is a natural result of the adversarial training."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.277777761220932,
0.05405404791235924,
0.11764705181121826,
0.04999999329447746,
0.2702702581882477,
0.10810810327529907,
0.31111109256744385,
0.2222222238779068,
0.29411762952804565,
0.1621621549129486,
0.11999999731779099,
0.12121211737394333,
0.04255318641662598,
0.1249999925494194,
0.11428570747375488,
0.1428571343421936,
0.10256409645080566,
0.13636362552642822,
0.1764705777168274,
0.052631575614213943,
0.05714285373687744,
0.10256409645080566,
0.2380952388048172,
0.3636363446712494,
0.25,
0.27272728085517883,
0.0714285671710968,
0.08510638028383255,
0.0624999962747097,
0.13333332538604736,
0.043478257954120636,
0.12903225421905518,
0.05128204822540283,
0.06896550953388214,
0.11764705181121826,
0.07792207598686218,
0.04999999329447746,
0.04878048226237297,
0.04444443807005882,
0.09999999403953552,
0.060606054961681366,
0.13333332538604736
] | HyRVBzap- | true | [
"Cascade adversarial training + low level similarity learning improve robustness against both white box and black box attacks."
] |
[
"Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities ``solve'' the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice.",
"We explain why exploding gradients occur and highlight the {\\it collapsing domain problem}, which can arise in architectures that avoid exploding gradients. \n\n",
"ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property.",
"By noticing that {\\it any neural network is a residual network}, we devise the {\\it residual trick}, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.",
"Arguably, the primary reason for the recent success of neural networks is their \"depth\", i.e. their ability to compose and jointly train nonlinear functions so that they co-adapt.",
"A large body of work has detailed the benefits of depth (e.g. Montafur et al. (2014) ; BID13 Martens et al. (2013) ; BID9 ; Shamir & Eldan (2015) ; Telgarsky (2015) ; Mhaskar & Shamir (2016) ).The",
"exploding gradient problem has been a major challenge for training very deep feedforward neural networks at least since the advent of gradient-based parameter learning (Hochreiter, 1991) . In",
"a nutshell, it describes the phenomenon that as the gradient is backpropagated through the network, it may grow exponentially from layer to layer. This",
"can, for example, make the application of vanilla SGD impossible for networks beyond a certain depth. Either",
"the step size is too large for updates to lower layers to be useful or it is too small for updates to higher layers to be useful. While",
"this intuitive notion is widely understood, there are important gaps in the foundational understanding of this phenomenon. In this",
"paper, we take a significant step towards closing those gaps.To begin with, there is no well-accepted metric for determining the presence of pathological exploding gradients. Should",
"we care about the length of the gradient vector? Should",
"we care about the size of individual components of the gradient vector? Should",
"we care about the eigenvalues of the Jacobians of individual layers? Depending",
"on the metric used, different strategies arise for combating exploding gradients. For example",
", manipulating the width of layers a suggested by e.g. BID3 ; Han et al. (2017) can greatly impact the size of gradient vector components but leaves the length of the gradient vector relatively unchanged.The underlying problem is that it is unknown whether exploding gradients according to any of these metrics necessarily lead to training difficulties. There is a",
"large body of evidence that gradient explosion defined by some metric when paired with some optimization algorithm on some architectures and datasets is associated with poor results (e.g. Schoenholz et al. (2017) ; Yang & Schoenholz (2017) ). But, can we",
"make general statements about entire classes of algorithms and architectures?Algorithms such",
"as RMSprop (Tieleman & Hinton, 2012) , Adam (Kingma & Ba, 2015) or vSGD (Schaul et al., 2013) are light modifications of SGD that rescale different parts of the gradient vector and are known to be able to lead to improved training outcomes. This raises an",
"another important unanswered question. Are exploding",
"gradients merely a numerical quirk to be overcome by simply rescaling different parts of the gradient vector or are they reflective of an inherently difficult optimization problem that cannot be easily tackled by simple modifications to a stock algorithm?It has become",
"a common notion that techniques such as introducing normalization layers (e.g. Ioffe & Szegedy (2015) , BID6 , BID12 , Salimans & Kingma (2016) ) or careful initial scaling of weights (e.g. He et al. (2015) , BID14 , Saxe et al. (2014) , Mishking & Matas (2016) ) largely eliminate exploding gradients by stabilizing forward activations. This notion was",
"espoused in landmark papers. The paper that",
"introduced batch normalization (Ioffe & Szegedy, 2015) states:In traditional deep networks, too-high learning rate may result in the gradients that explode or vanish, as well as getting stuck in poor local minima. Batch Normalization",
"helps address these issues.The paper that introduced ResNet (He et al., 2016b) states:Is learning better networks as easy as stacking more layers? An obstacle to answering",
"this question was the notorious problem of vanishing/exploding gradients, which hamper convergence from the beginning. This problem, however, has",
"been largely addressed by normalized initialization and intermediate normalization layers, ...We argue that these claims are overly optimistic. While scaling weights or normalizing",
"forward activations can reduce gradient growth defined according to certain metrics in certain situations, these techniques are not effective in general and can cause other problems even when they are effective. We intend to add nuance to these ideas",
"which have been widely adopted by the community (e.g. BID12 ; BID7 ). In particular, we intend to correct the",
"misconception that stabilizing forward activations is sufficient for avoiding exploding gradients (e.g. Klambauer et al. (2017) ).ResNet (He et al., 2016b) and other neural",
"network architectures utilizing skip connections (e.g. Huang et al. (2017) , Szegedy et al. (2016) ) have been highly successful recently. While the performance of networks without",
"skip connections starts to degrade when depth is increased beyond a certain point, the performance of ResNet continues to improve until a much greater depth is reached. While favorable changes to properties of",
"the gradient brought about by the introduction of skip connections have been demonstrated for specific architectures (e.g. Yang & Schoenholz (2017) ; BID7 ), a general explanation for the power of skip connections has not been given.Our contributions are as follows:1. We introduce the 'gradient scale coefficient",
"' (GSC), a novel measurement for assessing the presence of pathological exploding gradients (section 2). It is robust to confounders such as network",
"scaling (section 2) and layer width (section 3) and can be used",
"directly to show that training",
"is difficult (section 4). Therefore, we propose the unification of research",
"on the exploding gradient problem under this metric.2. We demonstrate that exploding gradients are in fact",
"present in a variety of popular MLP architectures, including architectures utilizing techniques that supposedly combat exploding gradients. We show that introducing normalization layers may even",
"exacerbate the exploding gradient problem (section 3).3. We show that exploding gradients as defined by the GSC",
"are not a numerical quirk to be overcome by rescaling different parts of the gradient vector, but are indicative of an inherently complex optimization problem and that they limit the depth to which MLP archi-tectures can be effectively trained, rendering very deep MLPs effectively much shallower (section 4). To our knowledge, this is the first time such a link has",
"been established.4. For the first time, we show why exploding gradients are",
"likely to occur in deep networks even when the forward activations do not explode (section 5). We argue that this is a fundamental reason for the difficulty",
"of constructing very deep trainable networks.5. For the first time, we define the 'collapsing domain problem'",
"for training very deep feedforward networks. We show how this problem can arise precisely in architectures",
"that avoid exploding gradients via careful initial scaling of weights and that it can be at least as damaging to the training process (section 6).6. For the first time, we show that the introduction of skip connections",
"has a strong gradientreducing effect on deep network architectures in general. We detail the surprising mathematical relationship that makes this possible",
"(section 7).7. We introduce the 'residual trick' (section 4), which reveals that ResNets are",
"a mathematically simpler version of networks without skip connections and thus approximately achieve what we term the 'orthogonal initial state'. This provides, we argue, the major reason for their superior performance at great",
"depths as well as an important criterion for neural network design in general (section 7).In section 8, we conclude and derive practical recommendations for designing and training",
"deep networks as well as key implications of our work for deep learning research.In the appendix in section B, we provide further high-level discussion. In section B.1, we discuss related work including the relationship of exploding gradients",
"with other measures of network trainability, such as eigenspectrum analysis (Saxe et al., 2014) , shattering gradients BID7 , trajectory lengths (Raghu et al., 2017) , covariate shift (e.g. (Ioffe & Szegedy, 2015) ) and Hessian conditioning (e.g. BID12 ). Recently, the behavior of neural networks at great depth was analyzed using mean field theory",
"(Poole et al., 2016; Schoenholz et al., 2017; Yang & Schoenholz, 2017; BID3 and dynamical systems theory BID10 BID0 . We discuss these lines of work in relation to this paper in sections B.1.1 and B.1.2 respectively",
". We discuss the implications of our work for the vanishing gradient problem in section B.2. We compare",
"the exploding gradient problem as it occurs in feedforward networks to the exploding and vanishing",
"gradient problems in RNNs (e.g. Pascanu et al. (2013) ) in section B.3. In section B.4, we highlight open research questions and potential future work.",
"Summary In this paper, we demonstrate that contrary to popular belief, many MLP architectures composed of popular layer types exhibit exploding gradients, and those that do not exhibit collapsing domains (section 3).",
"This tradeoff is caused by the discrepancy between absolute determinants and qm norms of layer-wise Jacobians (section 5).",
"Both sides of this tradeoff cause pathologies.",
"Exploding gradients, when defined by the GSC (section 2) cause low effective depth (section 4).",
"Collapsing domains cause pseudo-linearity and can also cause low effective depth (section 6).",
"However, both pathologies are caused to a surprisingly large degree by untrainable, and thus potentially unnecessary non-orthogonality contained in the initial functions.",
"Making the initial functions more orthogonal via e.g. skip connections leads to improved outcomes (section 7).",
"The effective depth measure has several limitations.One can train a linear MLP to have effective depth much larger than 1, but the result will still be equivalent to a depth 1 network.Consider the following training algorithm: first randomly re-sample the weights, then apply gradient descent.",
"Clearly, this algorithm is equivalent to just running gradient descent in any meaningful sense.",
"The re-sampling step nonetheless blows up the residual functions so as to significantly increase effective depth.The effective depth measure is very susceptible to the initial step size.",
"In our experiments, we found that starting off with unnecessarily large step sizes, even if those step sizes were later reduced, lead to worse outcomes.",
"However, because of the inflating impact on the residual function, the effective depth would be much higher nonetheless.Effective depth may change depending on how layers are defined.",
"In a ReLU MLP, for example, instead of considering a linear transformation and the following ReLU operation as different layers, we may define them to be part of the same layer.",
"While the function computed by the network and the course of gradient-based training do not depend on such redefinition, effective depth can be susceptible to such changes."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4146341383457184,
0.37037035822868347,
0.25806450843811035,
0.1538461446762085,
0.13333332538604736,
0.0952380895614624,
0.19999998807907104,
0.18867923319339752,
0.08163265138864517,
0.15686273574829102,
0.07999999821186066,
0.09999999403953552,
0.0952380895614624,
0.09090908616781235,
0.04651162400841713,
0.1304347813129425,
0.2195121943950653,
0.11428570747375488,
0.045454543083906174,
0.1621621549129486,
0.05128204822540283,
0.23188404738903046,
0.07499999552965164,
0.09999999403953552,
0.1538461446762085,
0.09999999403953552,
0.15686273574829102,
0.1428571343421936,
0.21875,
0.15094339847564697,
0.17241378128528595,
0.06779660284519196,
0.13333332538604736,
0.1621621549129486,
0.1428571343421936,
0.1395348757505417,
0.15789473056793213,
0.045454543083906174,
0.3265306055545807,
0.24561403691768646,
0.3265306055545807,
0.32941174507141113,
0.260869562625885,
0.23728813230991364,
0.04081632196903229,
0.2448979616165161,
0.3030303120613098,
0.18518517911434174,
0.1702127605676651,
0.0624999962747097,
0.06896550953388214,
0.1249999925494194,
0.09756097197532654,
0.12121211737394333,
0.20408162474632263,
0.3478260934352875,
0.10344827175140381,
0.19354838132858276,
0.07843136787414551,
0,
0.08510638028383255,
0.13333332538604736,
0.145454540848732,
0.07999999821186066,
0.19178082048892975,
0.12765957415103912,
0.1090909019112587,
0.07017543166875839,
0.14035087823867798,
0.1666666567325592,
0.24561403691768646
] | HkpYwMZRb | true | [
"We show that in contras to popular wisdom, the exploding gradient problem has not been solved and that it limits the depth to which MLPs can be effectively trained. We show why gradients explode and how ResNet handles them."
] |
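The gradient behaviour described above can be illustrated with a small toy experiment. The sketch below is not the paper's gradient scale coefficient; it only compares how a backpropagated error vector grows through plain linear layers versus residual layers whose Jacobian stays near the identity. The depth, width, and weight scales are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 100

def backprop_norms(make_jacobian):
    """Push a random error vector backwards through `depth` layers and
    record its norm after each layer (a crude proxy for gradient growth)."""
    g = rng.standard_normal(width)
    norms = [np.linalg.norm(g)]
    for _ in range(depth):
        g = make_jacobian() @ g            # multiply by one layer's Jacobian
        norms.append(np.linalg.norm(g))
    return norms

# Plain linear layers with a slightly over-scaled Gaussian init:
# the backpropagated norm grows roughly geometrically with depth.
plain = backprop_norms(lambda: rng.standard_normal((width, width)) * np.sqrt(1.5 / width))

# Residual layers x -> x + W x with small-scale W: each Jacobian is I + W,
# which stays close to the identity, so the norm grows far more slowly.
residual = backprop_norms(
    lambda: np.eye(width) + rng.standard_normal((width, width)) * np.sqrt(0.1 / width))

for name, norms in (("plain", plain), ("residual", residual)):
    total = norms[-1] / norms[0]
    print(f"{name:8s} per-layer growth ~ {total ** (1 / depth):.3f}, "
          f"total growth ~ {total:.2e}")
```

With these arbitrary settings the plain stack should inflate the gradient norm by several orders of magnitude over fifty layers, while the residual stack should stay within roughly one order of magnitude, which is the qualitative gradient-reducing effect of skip connections that the summary refers to.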
[
"In this paper, we are interested in two seemingly different concepts: \\textit{adversarial training} and \\textit{generative adversarial networks (GANs)}.",
"Particularly, how these techniques work to improve each other.",
"To this end, we analyze the limitation of adversarial training as a defense method, starting from questioning how well the robustness of a model can generalize.",
"Then, we successfully improve the generalizability via data augmentation by the ``fake'' images sampled from generative adversarial network.",
"After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free.",
"We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work.\n",
"Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker together in a single network.",
"After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively.",
"In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry \\textit{et al.}, 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018).",
"Deep neural networks have been very successful in modeling images, texts, and audios.",
"Nonetheless, their characters have not yet been fully understood BID36 , leaving a big hole for malicious attack algorithms.",
"In this paper, we start from adversarial attacks and defense but try to find the connection with Generative Adversarial Network (GAN) BID10 .",
"Superficially, the difference between them is that the adversarial attack is the algorithm that finds a highly resembled image to cheat the classifier, whereas the GAN algorithm at its core is a generative model where the generator learns to convert white noise to images that look like authentic to the discriminator.",
"We show in this paper that they are indeed closely related and can be used to strengthen each other: to accelerate and stabilize the GAN training cycle, the discriminator is expected to stay robust to adversarial examples; at the same time, a well trained generator provides a continuous support in probability space and thus improves the generalization ability of discriminator, even under adversarial attacks.",
"That is the starting point of our idea to associate generative networks with robust classifiers.",
"In this paper, we draw a connection between adversarial training BID25 and generative adversarial network BID10 .",
"Our primary goal is to improve the generalization ability of adversarial training and this is achieved by data augmentation by the unlimited fake images.",
"Independently, we see an improvement of both robustness and convergence speed in GAN training.",
"While the theoretical principle in behind is still unclear to us, we gave an intuitive explanation.",
"Apart from that, a minor contribution of our paper is the improved loss function of AC-GAN, showing a better result in image quality."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.060606054961681366,
0,
0.15789473056793213,
0.1249999925494194,
0.060606054961681366,
0.13333332538604736,
0.05882352590560913,
0.1860465109348297,
0.12244897335767746,
0,
0.05882352590560913,
0.1621621549129486,
0.15686273574829102,
0.14705881476402283,
0.06666666269302368,
0.13333332538604736,
0.1666666567325592,
0.13793103396892548,
0.06451612710952759,
0.1666666567325592
] | ryxtE3C5Fm | true | [
"We found adversarial training not only speeds up the GAN training but also increases the image quality"
] |
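As a concrete illustration of the kind of attack discussed above, the sketch below crafts a one-step sign-of-gradient (FGSM-style) perturbation against a toy logistic-regression classifier. The synthetic data, learning rate, and perturbation budget are all assumptions made for illustration; this is not the attack, classifier, or GAN setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two overlapping Gaussian blobs in 20 dimensions.
d, n = 20, 500
X = np.vstack([rng.normal(-0.5, 1.0, (n, d)), rng.normal(+0.5, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # P(class = 1 | x)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

def fgsm(Xs, ys, eps):
    """One-step sign-of-gradient perturbation that increases the loss on (Xs, ys)."""
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    grad_x = (p - ys)[:, None] * w[None, :]  # d(cross-entropy)/dx for logistic regression
    return Xs + eps * np.sign(grad_x)

def accuracy(Xs, ys):
    return np.mean(((Xs @ w + b) > 0).astype(float) == ys)

X_adv = fgsm(X, y, eps=0.5)
print(f"accuracy on clean inputs:     {accuracy(X, y):.2f}")
print(f"accuracy on perturbed inputs: {accuracy(X_adv, y):.2f}")
```

Each feature moves by at most eps, yet the accuracy on the perturbed batch should drop markedly below the clean accuracy, which is the failure mode that adversarial training augments against.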
[
"We present a method to train self-binarizing neural networks, that is, networks that evolve their weights and activations during training to become binary.",
"To obtain similar binary networks, existing methods rely on the sign activation function.",
"This function, however, has no gradients for non-zero values, which makes standard backpropagation impossible.",
"To circumvent the difficulty of training a network relying on the sign activation function, these methods alternate between floating-point and binary representations of the network during training, which is sub-optimal and inefficient.",
"We approach the binarization task by training on a unique representation involving a smooth activation function, which is iteratively sharpened during training until it becomes a binary representation equivalent to the sign activation function.",
"Additionally, we introduce a new technique to perform binary batch normalization that simplifies the conventional batch normalization by transforming it into a simple comparison operation.",
"This is unlike existing methods, which are forced to the retain the conventional floating-point-based batch normalization.",
"Our binary networks, apart from displaying advantages of lower memory and computation as compared to conventional floating-point and binary networks, also show higher classification accuracy than existing state-of-the-art methods on multiple benchmark datasets.",
"Deep learning has brought about remarkable advancements to the state-of-the-art in several fields including computer vision and natural language processing.",
"In particular, convolutional neural networks (CNN's) have shown state-of-the-art performance in several tasks such as object recognition with AlexNet BID19 , VGG BID29 , ResNet and detection with R-CNN BID10 BID9 BID26 .",
"However, to achieve real-time performance these networks are dependent on specialized hardware like GPU's because they are computation and memory demanding.",
"For example, AlexNet takes up 250Mb for its 60M parameters while VGG requires 528Mb for its 138M parameters.While the performance of deep networks has been gradually improving over the last few years, their computational speed has been steadily decreasing BID32 .",
"Notwithstanding this, interest has grown significantly in the deployment of CNN's in virtual reality headsets (Oculus, GearVR), augmented reality gear (HoloLens, Epson Moverio), and other wearable, mobile, and embedded devices.",
"While such devices typically have very restricted power and memory capacites, they demand low latency and real-time performance to be able to provide a good user experience.",
"Not surprisingly, there is considerable interest in making deep learning models computationally efficient to better suit such devices BID24 BID4 BID40 .Several",
"methods of compression, quantization, and dimensionality reduction have been introduced to lower memory and computation requirements. These methods",
"produce near state-of-the-art results, either with fewer parameters or with lower precision parameters, which is possible thanks to the redundancies in deep networks BID3 .In this paper",
"we focus on the solution involving binarization of weights and activations, which is the most extreme form of quantization. Binarized neural",
"networks achieve high memory and computational efficiency while keeping performance comparable to their floating point counterparts. BID5 have shown",
"that binary networks allow the replacement of multiplication and additions by simple bit-wise operations, which are both time and power efficient.The challenge in training a binary neural network is to convert all its parameters from the continuous domain to a binary representation, typically done using the sign activation function. However, since",
"the gradient of sign is zero for all nonzero inputs, it makes standard back-propagation impossible. Existing state-of-the-art",
"methods for network binarization BID5 BID25 alternate between a binarized forward pass and a floating point backward pass to circumvent this problem. In their case, the gradients",
"for the sign activation are approximated during the backward pass, thus introducing inaccuracies in training. Furthermore, batch normalization",
"BID16 is necessary in binary networks to avoid exploding feature map values due to the large scale of the weights. However, during inference, using",
"batch normalization introduces intermediary floating point representations. This means, despite binarizing weights",
"and activations, these networks can not be used on chips that do not support floating-point computations.In our method, the scaled hyperbolic tangent function tanh is used to bound the values in the range [−1, 1]. The network starts with floating point",
"values for weights and activations, and progressively evolves into a binary network as the scaling factor is increased. Firstly, this means that we do not have",
"to toggle between the binary and floating point weight representations. Secondly, we have a continuously differentiable",
"function that allows backpropagation passes. As another important contribution, we reduce the",
"standard batch normalization operation during the inference stage to a simple comparison. This modification is not only very efficient and",
"can be accomplished using fixedpoint operations, it is also an order of magnitude faster than the floating-point counterpart. More clearly, while existing binarization methods",
"perform, at each layer, the steps of binary convolutions, floating-point batch normalization, and sign activation, we only need to perform binary convolutions followed by our comparison-based batch normalization, which serves as the sign activation at the same time (see Fig. 1 ).",
"We present a novel method to binarize a deep network that is principled, simple, and results in binarization of weights and activations.",
"Instead of relying on the sign function, we use the tanh function with a controllable slope.",
"This simplifies the training process without breaking the flow of derivatives in the back-propagation phase as compared to that of existing methods that have to toggle between floating-point and binary representations.",
"In addition to this, we replace the conventional batch normalization, which forces existing binarization methods to use floating point computations, by a simpler comparison operation that is directly adapted to networks with binary activations.",
"Our simplified batch normalization is not only computationally trivial, it is also extremely memoryefficient.",
"Our trained binary networks outperform those of existing binarization schemes on standard benchmarks despite using lesser memory and computation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.3478260934352875,
0.052631575614213943,
0,
0.23076923191547394,
0.15094339847564697,
0.1702127605676651,
0.14999999105930328,
0.2545454502105713,
0.2222222238779068,
0.145454540848732,
0.17777776718139648,
0.09999999403953552,
0.1538461446762085,
0.1599999964237213,
0.25531914830207825,
0.24390242993831635,
0.23529411852359772,
0.27272728085517883,
0.13636362552642822,
0.34285715222358704,
0.1904761791229248,
0.19999998807907104,
0.0952380895614624,
0.25531914830207825,
0.05405404791235924,
0.2222222238779068,
0.2800000011920929,
0.19512194395065308,
0.10810810327529907,
0.2666666507720947,
0.16326530277729034,
0.13333332538604736,
0.5777777433395386,
0.14999999105930328,
0.23529411852359772,
0.21052631735801697,
0.052631575614213943,
0.1818181723356247
] | HJxKajC5t7 | true | [
"A method to binarize both weights and activations of a deep neural network that is efficient in computation and memory usage and performs better than the state-of-the-art."
] |
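Two of the ideas summarized above admit a compact numerical check: a scaled tanh sharpens toward the sign activation as its slope grows, and batch normalization followed by a sign activation collapses to a single threshold comparison. The sketch below uses made-up batch-norm statistics and parameters and is only meant to verify the algebra, not to reproduce the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# (i) A scaled tanh sharpens toward the sign function as the slope nu grows.
for nu in (1.0, 5.0, 50.0):
    gap = np.max(np.abs(np.tanh(nu * x) - np.sign(x)))
    print(f"nu = {nu:5.1f}   max |tanh(nu*x) - sign(x)| = {gap:.4f}")

# (ii) Batch norm followed by a sign activation reduces to one comparison:
# sign(gamma * (x - mu)/sigma + beta) equals sign(x - tau) when gamma > 0,
# with threshold tau = mu - beta * sigma / gamma (the comparison flips if gamma < 0).
mu, sigma, gamma, beta = 0.3, 1.7, 0.8, -0.2   # assumed, fixed inference-time BN values
bn_then_sign = np.sign(gamma * (x - mu) / sigma + beta)
tau = mu - beta * sigma / gamma
compare_only = np.sign(x - tau)
print("batch-norm + sign :", bn_then_sign)
print("threshold compare :", compare_only)
print("identical on this batch:", np.array_equal(bn_then_sign, compare_only))
```

The equivalence in part (ii) is what lets inference replace the floating-point batch-normalization step with a single fixed-point comparison per neuron, as the summary describes.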
[
" In many applications, the training data for a machine learning task is partitioned across multiple nodes, and aggregating this data may be infeasible due to storage, communication, or privacy constraints.",
"In this work, we present Good-Enough Model Spaces (GEMS), a novel framework for learning a global satisficing (i.e. \"good-enough\") model within a few communication rounds by carefully combining the space of local nodes' satisficing models.",
"In experiments on benchmark and medical datasets, our approach outperforms other baseline aggregation techniques such as ensembling or model averaging, and performs comparably to the ideal non-distributed models.\n",
"There has been significant work in designing distributed optimization methods in response to challenges arising from a wide range of large-scale learning applications.",
"These methods typically aim to train a global model by performing numerous communication rounds between distributed nodes.",
"However, most approaches treat communication reduction as an objective, not a constraint, and seek to minimize the number of communication rounds while maintaining model performance.",
"Less explored is the inverse setting-where our communication budget is fixed and we aim to maximize accuracy while restricting communication to only a few rounds.",
"These few-shot model aggregation methods are ideal when any of the following conditions holds:• Limited network infrastructure: Distributed optimization methods typically require a connected network to support the collection of numerous learning updates.",
"Such a network can be difficult to set up and maintain, especially in settings where devices may represent different organizational entities (e.g., a network of different hospitals).•",
"Privacy and data ephemerality: Privacy policies or regulations like GDPR may require nodes to periodically delete the raw local data. Few-shot",
"methods enable learning an aggregate model in ephemeral settings, where a node may lose access to its raw data. Additionally",
", as fewer messages are sent between nodes, these methods have the potential to offer increased privacy benefits.• Extreme asynchronicity",
": Even in settings where privacy is not a concern, messages from distributed nodes may be unevenly spaced and sporadically communicated over days, weeks, or even months (e.g., in the case of remote sensor networks or satellites). Few-shot methods drastically",
"limit communication and thus reduce the wall-clock time required to learn an aggregate model.Throughout this paper, we reference a simple motivating example. Consider two hospitals, A and",
"B, which each maintain private (unshareable) patient data pertinent to some disease. As A and B are geographically",
"distant, the patients they serve sometimes exhibit different symptoms. Without sharing the raw training",
"data, A and B would like to jointly learn a single model capable of generalizing to a wide range of patients. The prevalent learning paradigm",
"in this settingdistributed or federated optimization-dictates that A and B share iterative model updates (e.g., gradient information) over a network.From a meta-learning or multitask perspective, we can view each hospital (node) as a separate learning task, where our goal is to learn a single aggregate model which performs well on each task. However, these schemes often make",
"similar assumptions on aggregating data and learning updates from different tasks.As a promising alternative, we present good-enough model spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds. Intuitively, the key idea in GEMS",
"is to take advantage of the fact that many possible hypotheses may yield 'good enough' performance for a learning task on local data, and that considering the intersection between these sets can allow us to compute a global model quickly and easily. Our proposed approach has several",
"advantages. First, it is simple and interpretable",
"in that each node only communicates its locally optimal model and a small amount of metadata corresponding to local performance. Second, each node's message scales linearly",
"in the local model size. Finally, GEMS is modular, allowing the operator",
"to tradeoff the aggregate model's size against its performance via a hyperparameter .We make the following contributions in this work",
". First, we present a general formulation of the",
"GEMS framework. Second, we offer a method for calculating the",
"good-enough space on each node as a R d ball. We empirically validate GEMS on both standard",
"benchmarks (MNIST and CIFAR-10) as well as a domain-specific health dataset. We consider learning convex classifiers and neural",
"networks in standard distributed setups as well as scenarios in which some small global held-out data may be used for fine-tuning. We find that on average, GEMS increases the accuracy",
"of local baselines by 10.1 points and comes within 43% of the (unachievable) global ideal. With fine-tuning, GEMS increases the accuracy of local",
"baselines by 41.3 points and comes within 86% of the global ideal.",
"In summary, we introduce GEMS, a framework for learning an aggregated model across different nodes within a few rounds of communication.",
"We validate one approach for constructing good-enough model spaces (as R d balls) on three datasets for both convex classifiers and simple feedforward networks.",
"Despite the simplicity of the proposed approach, we find that it outperforms a wide range of baselines for effective model aggregation TAB0"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11764705181121826,
0.5090909004211426,
0.03999999538064003,
0.1818181723356247,
0.307692289352417,
0.30434781312942505,
0.13636362552642822,
0.15686273574829102,
0.08163265138864517,
0.04878048226237297,
0.2380952388048172,
0,
0.16129031777381897,
0.20408162474632263,
0,
0,
0.1818181723356247,
0.1315789371728897,
0.5806451439857483,
0.15625,
0,
0.1702127605676651,
0.060606054961681366,
0.1463414579629898,
0.20000000298023224,
0.1875,
0.10526315122842789,
0.15789473056793213,
0.1599999964237213,
0.0952380895614624,
0.11428570747375488,
0.5238094925880432,
0.13333332538604736,
0.1904761791229248
] | rJlrN9Bjh4 | true | [
"We present Good-Enough Model Spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds."
] |
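The summary above describes each node's good-enough set as a ball in R^d and the aggregate model as a point that every node finds acceptable. One plausible toy instantiation of that idea is alternating projection onto the balls, sketched below with made-up centers and radii; it is not the paper's actual algorithm, only an illustration of intersecting satisficing sets.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_nodes = 5, 3

# Each node i reports a locally optimal parameter vector c_i and a radius r_i
# such that every theta with ||theta - c_i|| <= r_i is "good enough" locally.
# (Centers and radii here are made up; radii are chosen so the balls overlap.)
centers = rng.normal(0.0, 0.5, (num_nodes, d))
radii = np.full(num_nodes, 1.5)

def project_onto_ball(theta, c, r):
    """Euclidean projection of theta onto the ball {x : ||x - c|| <= r}."""
    diff = theta - c
    dist = np.linalg.norm(diff)
    return theta if dist <= r else c + diff * (r / dist)

# Alternating projections: when the balls intersect, this converges to a point
# that lies inside every node's good-enough set.
theta = centers.mean(axis=0)
for _ in range(200):
    for c, r in zip(centers, radii):
        theta = project_onto_ball(theta, c, r)

slack = np.array([np.linalg.norm(theta - c) - r for c, r in zip(centers, radii)])
print("distance-to-ball slack per node (<= 0 means inside):", np.round(slack, 3))
```

If the balls failed to intersect, at least one slack would remain positive, signalling that no single model is good enough for every node at the chosen tolerance.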
[
"We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution.",
"WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE).\n",
"This regularizer encourages the encoded training distribution to match the prior.",
"We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE).",
"Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality.",
"The field of representation learning was initially driven by supervised approaches, with impressive results using large labelled datasets.",
"Unsupervised generative modeling, in contrast, used to be a domain governed by probabilistic approaches focusing on low-dimensional data.",
"Recent years have seen a convergence of those two approaches.",
"In the new field that formed at the intersection, variational auto-encoders (VAEs) BID16 constitute one well-established approach, theoretically elegant yet with the drawback that they tend to generate blurry samples when applied to natural images.",
"In contrast, generative adversarial networks (GANs) BID9 turned out to be more impressive in terms of the visual quality of images sampled from the model, but come without an encoder, have been reported harder to train, and suffer from the \"mode collapse\" problem where the resulting model is unable to capture all the variability in the true data distribution.",
"There has been a flurry of activity in assaying numerous configurations of GANs as well as combinations of VAEs and GANs.",
"A unifying framework combining the best of GANs and VAEs in a principled way is yet to be discovered.This work builds up on the theoretical analysis presented in BID3 .",
"Following ; BID3 , we approach generative modeling from the optimal transport (OT) point of view.",
"The OT cost (Villani, 2003) is a way to measure a distance between probability distributions and provides a much weaker topology than many others, including f -divergences associated with the original GAN algorithms BID25 .",
"This is particularly important in applications, where data is usually supported on low dimensional manifolds in the input space X .",
"As a result, stronger notions of distances (such as f -divergences, which capture the density ratio between distributions) often max out, providing no useful gradients for training.",
"In contrast, OT was claimed to have a nicer behaviour BID11 although it requires, in its GAN-like implementation, the addition of a constraint or a regularization term into the objective.",
": Both VAE and WAE minimize two terms: the reconstruction cost and the regularizer penalizing discrepancy between P Z and distribution induced by the encoder Q. VAE forces Q(Z|X =",
"x) to match P Z for all the different input examples x drawn from P X .",
"This is illustrated on picture",
"(a), where every single red ball is forced to match P Z depicted as the white shape.",
"Red balls start intersecting, which leads to problems with reconstruction.",
"In contrast, WAE forces the continuous mixture Q Z := Q(Z|X)dP X to match P Z , as depicted with the green ball in picture",
"(b).",
"As a result latent codes of different examples get a chance to stay far away from each other, promoting a better reconstruction.In this work we aim at minimizing OT W c (P X , P G ) between the true (but unknown) data distribution P X and a latent variable model P G specified by the prior distribution P Z of latent codes Z ∈ Z and the generative model P G (X|Z) of the data points X ∈ X given Z. Our main contributions are listed below (cf. also FIG0 ):• A new family of regularized auto-encoders (Algorithms 1, 2 and Eq. 4), which we call Wasserstein Auto-Encoders (WAE), that minimize the optimal transport W c (P X , P G ) for any cost function c.",
"Similarly to VAE, the objective of WAE is composed of two terms: the c-reconstruction cost and a regularizer D Z (P Z , Q Z ) penalizing a discrepancy between two distributions in Z: P Z and a distribution of encoded data points, i.e. DISPLAYFORM0 When c is the squared cost and D Z is the GAN objective, WAE coincides with adversarial auto-encoders of BID23 .•",
"Empirical evaluation of WAE on MNIST and CelebA datasets with squared cost c(x, y) = x − y 2 2 . Our",
"experiments show that WAE keeps the good properties of VAEs (stable training, encoder-decoder architecture, and a nice latent manifold structure) while generating samples of better quality, approaching those of GANs.• We",
"propose and examine two different regularizers D Z (P Z , Q Z ). One",
"is based on GANs and adversarial training in the latent space Z. The other uses the maximum mean discrepancy, which is known to perform well when matching high-dimensional standard normal distributions P Z BID10 . Importantly",
", the second option leads to a fully adversary-free min-min optimization problem.• Finally,",
"the theoretical considerations presented in BID3 and used here to derive the WAE objective might be interesting in their own right. In particular",
", Theorem 1 shows that in the case of generative models, the primal form of W c (P X , P G ) is equivalent to a problem involving the optimization of a probabilistic encoder Q(Z|X) .The paper is",
"structured as follows. In Section 2",
"we review a novel auto-encoder formulation for OT between P X and the latent variable model P G derived in BID3 . Relaxing the",
"resulting constrained optimization problem we arrive at an objective of Wasserstein auto-encoders. We propose two",
"different regularizers, leading to WAE-GAN and WAE-MMD algorithms. Section 3 discusses",
"the related work. We present the experimental",
"results in Section 4 and conclude by pointing out some promising directions for future work.",
"Using the optimal transport cost, we have derived Wasserstein auto-encoders-a new family of algorithms for building generative models.",
"We discussed their relations to other probabilistic modeling techniques.",
"We conducted experiments using two particular implementations of the proposed method, showing that in comparison to VAEs, the images sampled from the trained WAE models are of better quality, without compromising the stability of training and the quality of reconstruction.",
"Future work will include further exploration of the criteria for matching the encoded distribution Q Z to the prior distribution P Z , assaying the possibility of adversarially training the cost function c in the input space X , and a theoretical analysis of the dual formulations for WAE-GAN and WAE-MMD."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4375,
0.22727271914482117,
0.07692307233810425,
0.17142856121063232,
0.14999999105930328,
0.05882352590560913,
0.11764705181121826,
0.1538461446762085,
0.08510638028383255,
0.0615384578704834,
0.12121211737394333,
0.1818181723356247,
0.1249999925494194,
0.0833333283662796,
0.11764705181121826,
0.1860465109348297,
0.1395348757505417,
0.09756097197532654,
0.06451612710952759,
0.0952380895614624,
0.060606054961681366,
0.07692307233810425,
0.05128204822540283,
0.11538461595773697,
0.0952380895614624,
0.1111111044883728,
0.21739129722118378,
0.06896550953388214,
0.1666666567325592,
0.13333332538604736,
0.05405404791235924,
0.1249999925494194,
0,
0.1621621549129486,
0.25806450843811035,
0,
0.1818181723356247,
0,
0.23529411852359772,
0.07999999821186066,
0.12244897335767746,
0.1111111044883728
] | HkL7n1-0b | true | [
"We propose a new auto-encoder based on the Wasserstein distance, which improves on the sampling properties of VAE."
] |
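The WAE-MMD variant mentioned above penalizes the discrepancy D_Z(P_Z, Q_Z) between the aggregated code distribution and the prior using the maximum mean discrepancy. The sketch below computes a biased RBF-kernel MMD^2 estimate on toy samples standing in for encoded codes and prior draws; the bandwidth, dimensions, and sample sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, bandwidth):
    """k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)) for all pairs of rows."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2(Z_q, Z_p, bandwidth=1.0):
    """Biased estimate of MMD^2 between samples Z_q ~ Q_Z and Z_p ~ P_Z."""
    k_qq = rbf_kernel(Z_q, Z_q, bandwidth).mean()
    k_pp = rbf_kernel(Z_p, Z_p, bandwidth).mean()
    k_qp = rbf_kernel(Z_q, Z_p, bandwidth).mean()
    return k_qq + k_pp - 2 * k_qp

dim, n = 8, 512
prior = rng.standard_normal((n, dim))                  # samples from P_Z = N(0, I)
codes_far = 2.0 + 0.5 * rng.standard_normal((n, dim))  # encoder output far from the prior
codes_close = rng.standard_normal((n, dim))            # encoder output matching the prior

print(f"MMD^2(codes_far,   prior) = {mmd2(codes_far, prior):.4f}")
print(f"MMD^2(codes_close, prior) = {mmd2(codes_close, prior):.4f}")
# In WAE-MMD a penalty of this form, scaled by a coefficient, is added to the
# reconstruction cost and minimized with respect to the encoder.
```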
[
"In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.",
"We leverage the sequential composition theory in DP, to establish a new connection between DP preservation and provable robustness.",
"To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model.",
"An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.",
"The pervasiveness of machine learning exposes new vulnerabilities in software systems, in which deployed machine learning models can be used",
"(a) to reveal sensitive information in private training data (Fredrikson et al., 2015) , and/or",
"(b) to make the models misclassify, such as adversarial examples (Carlini & Wagner, 2017) .",
"Efforts to prevent such attacks typically seek one of three solutions:",
"(1) Models which preserve differential privacy (DP) (Dwork et al., 2006) , a rigorous formulation of privacy in probabilistic terms; (2) Adversarial training algorithms, which augment training data to consist of benign examples and adversarial examples crafted during the training process, thereby empirically increasing the classification accuracy given adversarial examples (Kardan & Stanley, 2017; Matyasko & Chau, 2017) ; and (3) Provable robustness, in which the model classification given adversarial examples is theoretically guaranteed to be consistent, i.e., a small perturbation in the input does not change the predicted label (Cisse et al., 2017; Kolter & Wong, 2017) .",
"On the one hand, private models, trained with existing privacy-preserving mechanisms (Abadi et al., 2016; Shokri & Shmatikov, 2015; Phan et al., 2016; 2017b; a; Yu et al., 2019; Lee & Kifer, 2018) , are unshielded under adversarial examples.",
"On the other hand, robust models, trained with adversarial learning algorithms (with or without provable robustness to adversarial examples), do not offer privacy protections to the training data.",
"That one-sided approach poses serious risks to machine learning-based systems; since adversaries can attack a deployed model by using both privacy inference attacks and adversarial examples.",
"To be safe, a model must be",
"i) private to protect the training data, and",
"ii) robust to adversarial examples.",
"Unfortunately, there has not yet been research on how to develop such a model, which thus remains a largely open challenge.",
"Simply combining existing DP-preserving mechanisms and provable robustness conditions (Cisse et al., 2017; Kolter & Wong, 2017; Raghunathan et al., 2018) cannot solve the problem, for many reasons.",
"(a) Existing sensitivity bounds (Phan et al., 2016; 2017b;",
"a) and designs (Yu et al., 2019; Lee & Kifer, 2018) have not been developed to protect the training data in adversarial training.",
"It is obvious that using adversarial examples crafted from the private training data to train our models introduces a previously unknown privacy risk, disclosing the participation of the benign examples (Song et al., 2019) .",
"(b) There is an unrevealed interplay among DP preservation, adversarial learning, and robustness bounds.",
"(c) Existing algorithms cannot be readily applied to address the trade-off among model utility, privacy loss, and robustness.",
"Therefore, theoretically bounding the robustness of a model (which both protects the privacy and is robust against adversarial examples) is nontrivial.",
"In this paper, we established a connection among DP preservation to protect the training data, adversarial learning, and provable robustness.",
"A sequential composition robustness theory was introduced to generalize robustness given any sequential and bounded function of independent defensive mechanisms.",
"An original DP-preserving mechanism was designed to address the trade-off among model utility, privacy loss, and robustness by tightening the global sensitivity bounds.",
"A new Monte Carlo Estimation was proposed to improve and stabilize the estimation of the robustness bounds; thus improving the certified accuracy under adversarial example attacks.",
"However, there are several limitations.",
"First, the accuracy of our model under adversarial example attacks is still very low.",
"Second, the mechanism scalability is dependent on the model structures.",
"Third, further study is needed to address the threats from adversarial examples crafted by unseen attack algorithms.",
"Fourth, in this study, our goal is to illustrate the difficulties in providing DP protections to the training data in adversarial learning with robustness bounds.",
"The problem is more challenging when working with complex and large networks, such as ResNet (He et al., 2015) , VGG16 (Zhang et al., 2015) , LSTM (Hochreiter & Schmidhuber, 1997) , and GAN (Goodfellow et al., 2014a) .",
"Fifth, there can be alternative approaches to draft and to use DP adversarial examples.",
"Addressing these limitations needs significant efforts from both research and practice communities.",
"A NOTATIONS AND TERMINOLOGIES",
"Function/model f that maps inputs x to a vector of scores f (x) = {f1(x), . . . , fK (x)} yx ∈ y A single true class label of example x y(x) = max k∈K f k (x)",
"Predicted label for the example x given the function f x adv = x + α Adversarial example where α is the perturbation lp(µ) = {α ∈ R d : α p ≤ µ} The lp-norm ball of attack radius µ ( r , δr)",
"Robustness budget r and broken probability δr",
"The expected value of f k (x) E lb andÊ ub Lower and upper bounds of the expected valueÊf (x) =",
"Feature representation learning model with x and parameters θ1 Bt A batch of benign examples xi",
"Data reconstruction function given Bt in a(x, θ1)",
"The values of all hidden neurons in the hidden layer h1 of a(x, θ1) given the batch Bt RB t (θ1) and R B t (θ1)",
"Approximated and perturbed functions of RB t (θ1) xi and xi Perturbed and reconstructed inputs xi",
"Sensitivity of the approximated function RB t (θ1) h1B Sensitivities of x and h, given the perturbation α ∈ lp(1)",
"Privacy budget to protect the training data D (κ + ϕ)max Robustness size guarantee given an input x at the inference time B PSEUDO-CODE OF ADVERSARIAL TRAINING (KURAKIN ET AL., 2016B)",
"Given a loss function:",
"where m 1 and m 2 correspondingly are the numbers of examples in B t and B adv t at each training step.",
"Proof 1 Assume that B t and B t differ in the last tuple, x m (x m ).",
"Then,",
"Proof 2 Regarding the computation of h 1Bt = {θ",
"The sensitivity of a function h is defined as the maximum change in output, that can be generated by a change in the input (Lecuyer et al., 2018) .",
"Therefore, the global sensitivity of h 1 can be computed as follows:",
"following matrix norms (Operator norm, 2018): θ T 1 1,1 is the maximum 1-norm of θ 1 's columns.",
"By injecting Laplace noise Lap(",
", and χ 2 drawn as a Laplace noise [Lap(",
"β , in our mechanism, the perturbed affine transformation h 1Bt is presented as:",
"This results in an ( 1 /γ)-DP affine transformation h 1Bt = {θ",
"Similarly, the perturbed inputs",
"where ∆ x is the sensitivity measuring the maximum change in the input layer that can be generated by a change in the batch B t and γ x = ∆ R m∆x .",
"Following (Lecuyer et al., 2018) , ∆ x can be computed as follows:",
"Consequently, Lemma 3 does hold."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.13333332538604736,
0.0952380895614624,
0,
0.0714285671710968,
0.1538461446762085,
0.07999999821186066,
0.09090908616781235,
0.09756097197532654,
0.045454543083906174,
0.1111111044883728,
0.054054051637649536,
0,
0.10526315122842789,
0.1249999925494194,
0.06451612710952759,
0,
0,
0.11764705181121826,
0.04651162400841713,
0,
0.06896550953388214,
0,
0.06451612710952759,
0.06896550953388214,
0.060606054961681366,
0.05714285373687744,
0,
0,
0,
0.0714285671710968,
0.1875,
0.04878048226237297,
0.0833333283662796,
0,
0,
0.0476190447807312,
0.0416666641831398,
0.1111111044883728,
0,
0.07407406717538834,
0.10526315122842789,
0.0624999962747097,
0,
0,
0.1428571343421936,
0,
0.06666666269302368,
0.07407406717538834,
0,
0.0555555522441864,
0,
0,
0,
0,
0.07999999821186066,
0.0833333283662796,
0,
0.054054051637649536,
0,
0
] | Byg-An4tPr | true | [
"Preserving Differential Privacy in Adversarial Learning with Provable Robustness to Adversarial Examples"
] |
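Throughout the appendix sentences above, functions are perturbed with Laplace noise whose scale is a sensitivity divided by a privacy budget. The sketch below shows that generic Laplace mechanism on a toy counting query; the query, dataset, and budgets are made up, and the paper's own layer-wise sensitivity bounds are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value + Lap(sensitivity / epsilon) noise, which gives epsilon-DP
    for any query whose output changes by at most `sensitivity` when one record
    of the dataset is changed."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy query: how many patients in a batch are over 60.
ages = rng.integers(20, 90, size=200)
true_count = float(np.sum(ages > 60))
sensitivity = 1.0   # adding or removing one person changes the count by at most 1

for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity, epsilon)
    print(f"epsilon = {epsilon:5.1f}   true = {true_count:.0f}   released = {noisy:7.2f}")
```

Smaller budgets buy stronger privacy at the cost of noisier answers, which is the utility-versus-privacy-loss trade-off the summary refers to.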
[
"In high-dimensional reinforcement learning settings with sparse rewards, performing\n",
"effective exploration to even obtain any reward signal is an open challenge.\n",
"While model-based approaches hold promise of better exploration via planning, it\n",
"is extremely difficult to learn a reliable enough Markov Decision Process (MDP)\n",
"in high dimensions (e.g., over 10^100 states).",
"In this paper, we propose learning\n",
"an abstract MDP over a much smaller number of states (e.g., 10^5), which we can\n",
"plan over for effective exploration.",
"We assume we have an abstraction function\n",
"that maps concrete states (e.g., raw pixels) to abstract states (e.g., agent position,\n",
"ignoring other objects).",
"In our approach, a manager maintains an abstract\n",
"MDP over a subset of the abstract states, which grows monotonically through targeted\n",
"exploration (possible due to the abstract MDP).",
"Concurrently, we learn a\n",
"worker policy to travel between abstract states; the worker deals with the messiness\n",
"of concrete states and presents a clean abstraction to the manager.",
"On three of\n",
"the hardest games from the Arcade Learning Environment (Montezuma's,\n",
"Pitfall!, and Private Eye), our approach outperforms the previous\n",
"state-of-the-art by over a factor of 2 in each game.",
"In Pitfall!, our approach is\n",
"the first to achieve superhuman performance without demonstrations.",
"Exploration is a key bottleneck in high-dimensional, sparse-reward reinforcement learning tasks.",
"Random exploration (e.g., via epsilon-greedy) suffices when rewards are abundant BID24 , but when rewards are sparse, it can be difficult for an agent starting out to even find any positive reward needed to bootstrap learning.",
"For example, the infamously difficult game MON-TEZUMA'S REVENGE from the Arcade Learning Environment (ALE) BID6 contains over 10 100 states and requires the agent to go thousands of timesteps without receiving reward.",
"Performing effective exploration in this setting is thus an open problem; without demonstrations, even state-of-the-art intrinsically-motivated RL agents BID5 BID44 achieve only about one-tenth the score of an expert human .In",
"this paper, we investigate model-based reinforcement learning BID17 as a potential solution to the exploration problem. The",
"hope is that with a model of the state transitions and rewards, one can perform planning under the model to obtain a more informed exploration strategy. However",
", as the model being learned is imperfect, errors in the model compound BID42 BID43 when planning over many time steps. Furthermore",
", even if a perfect model were known, in high-dimensional state spaces (e.g. over 10 100 states), planning-computing the optimal policy (e.g. via value iteration)-is intractable. As a result",
", model-based RL has had limited success in high-dimensional settings BID42 . To address",
"this, some prior work has focused on learning more accurate models by using more expressive function approximators BID25 , and learning local models BID21 BID50 . Others have",
"attempted to robustly use imperfect models by conditioning on, instead of directly following, model-based rollouts BID49 , frequently replanning, and combining model-based with model-free approaches BID0 BID40 . However, none",
"of these techniques offer a fundamental solution.Instead of directly learning a model over the concrete state space, we propose an approach inspired by hierarchical reinforcement learning (HRL) BID41 BID46 Figure 1 : (a) Illustration",
"of the abstract MDP on MONTEZUMA'S REVENGE. We have superimposed",
"a white grid on top of the original game. At any given time, the",
"agent is in one of the grid cellseach grid cell is an abstract state. In this example, the agent",
"starts at the top of a ladder (yellow dot). The worker then navigates",
"transitions between abstract states (green arrows) to follow a plan made by the manager (red dots). (b) Circles represent abstract",
"states. Shaded circles represent states",
"within the known set. The manager navigates the agent",
"to the fringe of the known set (s 3 ), then randomly explores with π d to discover new transitions near s 3 (dotted box). (c) The worker extends the abstract",
"MDP by learning to navigate to the newly discovered abstract states (dotted arrows). 2017), and learn a model over a much",
"smaller abstract state space. Specifically, we assume we have a state",
"abstraction function BID22 BID36 BID9 , which maps a highdimensional concrete state (e.g. all pixels on the screen) to a low-dimensional abstract state (e.g. the position of the agent). We then aim to learn an (abstract) Markov",
"Decision Process (MDP) over this abstract state space as follows: A manager maintains an (abstract) MDP over a subset of all possible abstract states which we call the known set, which is grown over time. The crucial property we enforce is that this",
"abstract MDP is highly accurate and near deterministic on the known set, so we can perform planning without suffering from compounding errors, and do it efficiently since we are working with a much smaller number of abstract states. Concurrently, we learn a worker policy that",
"the manager uses to transition between abstract states. The worker policy has access to the concrete",
"states; its goal is to hide the messy details of the real world from the manager (e.g., jumping over monsters) so that the manager has a much simpler planning problem (e.g., traversing between two locations). In our implementation, the worker keeps an inventory",
"of skills (i.e., options BID41 ), each of which is driven by a deep neural network; the worker assigns an appropriate skill for each transition between abstract states. In this way, the worker does not \"forget\" BID20 , and",
"we ensure monotonic progress in learning the abstract MDP. This abstract MDP, which enables us to efficiently explore",
"via planning, is a key difference between our work and previous HRL work (e.g., BID4 BID46 ), which also learn skills and operate on latent abstract state spaces but without forming an MDP.We evaluate our approach on three of the most challenging games from the ALE BID6 : MONTEZUMA'S REVENGE, PITFALL!, and PRIVATE EYE. In all three domains, our approach achieves more than 2x the",
"reward of prior non-demonstration state-of-the-art approaches. In PITFALL!, we are the first to achieve superhuman performance",
"without demonstrations, surpassing the prior state-of-the-art by over 100x. Additionally, since our approach is model-based, we can generalize",
"to new rewards without re-training, as long as the reward function is a function of the abstract states. When evaluated on a new reward function never seen during training",
", our approach achieves over 3x the reward of prior state-of-the-art methods explicitly trained on the new rewards.",
"This work presents a framework for tackling long-horizon, sparse-reward, high-dimensional tasks by using abstraction to decrease the dimensionality of the state space and to address compounding model errors.",
"Empirically, this framework performs well in hard exploration tasks, and theoretically guarantees near-optimality.",
"However, this work has limitations as well.",
"First, our approach relies on some prior knowledge in the state abstraction function, although we compare against state-ofthe-art methods using a similar amount of prior knowledge in our experiments.",
"This information is readily available in the ALE, which exposes the RAM, and in many robotics tasks, which expose the underlying state (e.g., joint angles and object positions).",
"Still, future work could attempt to automatically learn the state abstraction or extract the abstraction directly from the visible pixels.",
"One potential method might be to start with a coarse represention, and iteratively refine the representation by splitting abstract states whenever reward is discovered.",
"Another limitation of our work is that our simple theoretical guarantees require relatively strong assumptions.",
"Fortunately, even when these assumptions are not satisfied, our approach can still perform well, as in our experiments.",
"A EXPERIMENT DETAILS Following Mnih et al. (2015) , the pixel concrete states are downsampled and cropped to 84 by 84 and then are converted to grayscale.",
"To capture velocity information, the worker receives as input the past four frames stacked together.",
"Every action is repeated 4 times.In addition, MONTEZUMA'S REVENGE and PITFALL! are deterministic by default.",
"As a result, the manager deterministically navigates to the fringes of the known set by calling on the worker's deterministic, saved skills.",
"To minimize wallclock training time, we save the states at the fringes of the known set and enable the worker to teleport to those states, instead of repeatedly re-simulating the entire trajectory.",
"When the worker teleports, we count all the frames it would have had to simulate as part of the training frames.",
"Importantly, this only affects wallclock time, and does not benefit or change the agent in any way.",
"Notably, this does not apply to PRIVATE EYE, where the initial state is stochastically chosen from two similar possible states.A.1",
"HYPERPARAMETERS All of our hyperparameters are only tuned on MONTEZUMA'S REVENGE.",
"Our skills are trained with the Adam optimizer BID19 with the default hyperparameters.",
"TAB5 describes all hyperparameters and the values used during experiments (bolded), as well as other values that we tuned over (non-bolded).",
"Most of our hyperparameters were selected once and never tuned."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.05128204822540283,
0,
0.21052631735801697,
0,
0,
0.09302324801683426,
0,
0.060606058686971664,
0.10256409645080566,
0,
0.11764705181121826,
0.10256409645080566,
0.12121211737394333,
0.06666666269302368,
0.10810810327529907,
0.1621621549129486,
0,
0,
0.17142856121063232,
0.1666666567325592,
0.06451612710952759,
0.11764705181121826,
0.054054051637649536,
0.03389830142259598,
0.0714285671710968,
0.0714285671710968,
0.09302324801683426,
0.11999999731779099,
0,
0.03703703358769417,
0,
0.11999999731779099,
0.1111111044883728,
0.06896550953388214,
0.1666666567325592,
0.10256409645080566,
0.04878048226237297,
0.05128204822540283,
0.17777776718139648,
0,
0,
0.07547169178724289,
0.2222222238779068,
0.11428570747375488,
0.20689654350280762,
0.09677419066429138,
0.12121211737394333,
0.09999999403953552,
0.0615384578704834,
0.12903225421905518,
0.1860465109348297,
0.119047611951828,
0.1428571343421936,
0.09090908616781235,
0.1666666567325592,
0.09302324801683426,
0.1538461446762085,
0.05128204822540283,
0,
0.07843136787414551,
0.039215680211782455,
0.09302324801683426,
0.19999998807907104,
0,
0,
0.12244897335767746,
0,
0.0952380895614624,
0.17777776718139648,
0.07692307233810425,
0.045454539358615875,
0.04651162400841713,
0.0416666604578495,
0.054054051637649536,
0,
0.04444443807005882,
0.0555555522441864
] | ryxLG2RcYX | true | [
"We automatically construct and explore a small abstract Markov Decision Process, enabling us to achieve state-of-the-art results on Montezuma's Revenge, Pitfall!, and Private Eye by a significant margin."
] |
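The manager described above plans over a small, near-deterministic abstract MDP, for example to route the agent to the fringe of the known set. The sketch below runs tabular value iteration on a tiny made-up deterministic grid of abstract states and reads off a greedy route; the grid, goal, and discount are assumptions, and the worker policy that actually executes each abstract transition is not modelled.

```python
import numpy as np

# A tiny deterministic "abstract MDP": a 4x4 grid of abstract states and 4 moves.
# The manager plans a route from its current abstract state to a fringe state.
H, W = 4, 4
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
start, fringe_goal = (0, 0), (3, 3)

def step(state, action):
    """Deterministic abstract transition; moves off the grid keep the agent in place."""
    r = min(max(state[0] + action[0], 0), H - 1)
    c = min(max(state[1] + action[1], 0), W - 1)
    return (r, c)

# Tabular value iteration with reward 1 for reaching the goal and discount 0.9.
gamma = 0.9
V = np.zeros((H, W))
V[fringe_goal] = 1.0
for _ in range(50):
    for r in range(H):
        for c in range(W):
            if (r, c) == fringe_goal:
                continue                        # terminal abstract state keeps its value
            V[r, c] = max(gamma * V[step((r, c), a)] for a in actions)

# Greedy plan: from each abstract state, move toward the highest-value neighbour.
plan, state = [start], start
while state != fringe_goal and len(plan) < H * W:
    state = max((step(state, a) for a in actions), key=lambda s: V[s])
    plan.append(state)
print("planned route through abstract states:", plan)
```

In the summarized approach the planning problem stays this small because it is posed over abstract states rather than raw pixels, which is what makes exploration by planning tractable.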
[
"Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment.",
"This is a critical shortcoming for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment -- e.g. systems that learn from human interaction.",
"Thus, we develop a novel class of off-policy batch RL algorithms which use KL-control to penalize divergence from a pre-trained prior model of probable actions.",
"This KL-constraint reduces extrapolation error, enabling effective offline learning, without exploration, from a fixed batch of data.",
"We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning.",
"This Way Off-Policy (WOP) algorithm is tested on both traditional RL tasks from OpenAI Gym, and on the problem of open-domain dialog generation; a challenging reinforcement learning problem with a 20,000 dimensional action space.",
"WOP allows for the extraction of multiple different reward functions post-hoc from collected human interaction data, and can learn effectively from all of these.",
"We test real-world generalization by deploying dialog models live to converse with humans in an open-domain setting, and demonstrate that WOP achieves significant improvements over state-of-the-art prior methods in batch deep RL.\n",
"In order to scale deep reinforcement learning (RL) to safety-critical, real-world domains, two abilities are needed.",
"First, since collecting real-world interaction data can be expensive and timeconsuming, algorithms must be able to learn from off-policy data no matter how it was generated, or how little correlation between the data distribution and the current policy.",
"Second, it is often necessary to carefully test a policy before deploying it to the real world; for example, to ensure its behavior is safe and appropriate for humans.",
"Thus, the algorithm must be able to learn offline first, from a static batch of data, without the ability to explore.",
"This off-policy, batch reinforcement learning (BRL) setting represents a challenging RL problem.",
"Most deep RL algorithms fail to learn from data that is not heavily correlated with the current policy (Fujimoto et al., 2018b) .",
"Even models based on off-policy algorithms like Q-learning fail to learn in the offline, batch setting, when the model is not able to explore.",
"If the batch data is not sufficient to cover the state-action space, BRL models can suffer from extrapolation error, learning unrealistic value estimates of state-action pairs not contained in the batch (Fujimoto et al., 2018b) .",
"It can be impossible to correct for extrapolation error when there is a mismatch in the distribution of stateactions pairs in the batch data, and the distribution induced by the learned policy.",
"For example, if the policy learns to select actions which are not contained in the batch, it cannot learn a reasonable value function for those actions.",
"Figure 1 illustrates this concept, where the batch only covers a subset of possible policies.",
"Extrapolation error is particularly problematic in high-dimensional state and action spaces (such as those inherent in language generation).",
"We propose to resolve these issues by leveraging a pre-trained generative model of the state-action space, p(a|s), trained on known sequences of interaction data.",
"While training with RL, we penalize divergence from this prior model with different forms of KL-control.",
"This technique ensures that the RL model learns a policy that stays close the state-action distribution of the batch, combating Figure 1 : In this example batch RL problem, the robot's goal is to travel the minimum distance around the black walls to get to the red flag.",
"A trained behavior policy generated the batch data; the probability of each of the states appearing in the batch, p B (s), is in yellow (white locations are not contained in the batch).",
"If the offline RL policy estimates the value of going up or left from the start position is high, it will have no way to refine this estimate using the batch data, or learn a good policy in this region of state space.",
"The KL-constraint ensures that the RL policy will stay within the support of the batch data.",
"However, the behavior policy is suboptimal, so using behavior cloning to directly imitate the batch data will result in suboptimal return.",
"Instead, the KL-constrained model can learn to find the optimal policy, which is within the support of the batch.",
"extrapolation error.",
"We also propose using dropout to obtain uncertainty estimates of the target Qvalues, and use this lower bound to alleviate overestimation bias.",
"We benchmark against a discrete adaptation of Batch Constrained Q-learning (BCQ) (Fujimoto et al., 2018b) , a recently proposed state-of-the-art BRL algorithm for continuous domains, and show that our Way Off-Policy algorithm achieves superior performance in both a traditional RL domain, as well as in a challenging, underexplored, real-world reinforcement learning problem: using implicitly expressed human reactions in chat to improve open-domain dialog systems.",
"When a machine learning system interacts with humans, ideally we would like to learn about the humans' preferences in order to improve its performance.",
"Yet having humans manually indicate their preferences through explicit means like pressing a button (e.g. Christiano et al. (2017) ) or submitting a feedback report, does not scale.",
"Instead, we would like to be able to use humans' implicit reactions, such as the sentiment they express, or the length of the conversation, in order to improve the policy.",
"However, applying off-policy batch RL to language generation is challenging because the number of potential combinations of words and sentences leads to a combinatorial explosion in the size of the state space.",
"The action space -the set of frequent vocabulary words in the English language -is 20,000-dimensional.",
"This compounds extrapolation error, making BRL even more difficult.",
"However, when learning from human interactions in the wild, it is crucial to be able to learn offline and test the policy before deploying it, lest it learn inappropriate behaviors (e.g. Horton (2016) ).",
"To support this work, we developed an interactive online platform that allows humans to chat with deep neural network dialog models running on a GPU; the BRL models trained for this study are available live at https://neural.chat/rl/.",
"Through this platform we collected human responses to a set of over 40 different dialog models over the course of several months.",
"Using our Way Off-Policy algorithm, we are able to effectively learn from this batch of data, in spite of the fact that it was generated with a vastly different set of model architectures, which were trained on different datasets.",
"Further, we use the batch to learn from many different reward functions designed post-hoc to extract implicit human preferences, something that is only possible with effective off-policy BRL.",
"In summary, the contributions of this paper are:",
"• A novel algorithm, Way Off-Policy learning, which is the first to propose using KL-control from a pre-trained prior model as a way to reduce extrapolation error in batch RL.",
"• Experiments showing the effectiveness of WOP above strong baselines based on prior work (e.g. Fujimoto et al. (2018b) ), on both traditional RL tasks and on the challenging problem of open-domain dialog generation.",
"• A set of novel conversation rewards based on how human preferences are implicitly expressed in text.",
"We are the first work to learn from implicit signals in conversation offline using batch RL.",
"This paper presents the Way Off-Policy (WOP) algorithm, which improves performance when learning off-policy without the possibility to explore -i.e. batch RL (BRL).",
"We are the first to propose using KL-control from a strong prior model pre-trained on data as a way to avoid extrapolation and instability in BRL.",
"Our results on traditional RL tasks demonstrate that our WOP algorithm provides performance improvements over state-of-the-art BRL techniques, and the results in dialog generation show that KL-control is critical to achieving good performance in this real-world, highdimensional setting.",
"In a generative domain such as dialog, the true reward function is not known, and trivially exploiting the rewards can actually lead to worse performance.",
"Thus, KL-control may be particularly necessary to ensure samples remain realistic and close to the data distribution.",
"We propose several reward functions that could allow an open-domain dialog generation model to learn from rich cues implicit in human interaction, where learning from expressed sentiment was most promising.",
"We find that maximizing implicit rewards leads to better performance than relying on explicit feedback.",
"We hope that the techniques presented here can improve learning with RL from offline data, making it easier to apply RL to safety-critical settings such as human interaction.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.35999998450279236,
0.2857142686843872,
0.36734694242477417,
0.23255813121795654,
0.17777776718139648,
0.17543859779834747,
0.2916666567325592,
0.27586206793785095,
0.04878048226237297,
0.17241378128528595,
0.11999999731779099,
0.5333333015441895,
0.15789473056793213,
0.2448979616165161,
0.2916666567325592,
0.28070175647735596,
0.30188679695129395,
0.19999998807907104,
0.19512194395065308,
0.04651162400841713,
0.2448979616165161,
0.19512194395065308,
0.21875,
0.1538461446762085,
0.32258063554763794,
0.25,
0.17777776718139648,
0.2857142686843872,
0.1702127605676651,
0.1927710771560669,
0.20408162474632263,
0.03703703358769417,
0.15686273574829102,
0.2641509473323822,
0.1463414579629898,
0,
0.17543859779834747,
0.19354838132858276,
0.260869562625885,
0.32258063554763794,
0.22641508281230927,
0.11764705181121826,
0.37037035822868347,
0.14035087823867798,
0.09302324801683426,
0.380952388048172,
0.2448979616165161,
0.35999998450279236,
0.23333333432674408,
0.1599999964237213,
0.1428571343421936,
0.2545454502105713,
0.1463414579629898,
0.307692289352417
] | rJl5rRVFvH | true | [
"We show that KL-control from a pre-trained prior can allow RL models to learn from a static batch of collected data, without the ability to explore online in the environment."
] |
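The record above combines a KL penalty toward a pre-trained prior p(a|s) with dropout-based lower bounds on target Q-values. The following is a minimal sketch of one plausible loss built from those two ideas, not the released Way Off-Policy implementation; `q_net`, `target_net`, `prior_logp`, the batch tuple layout, and the soft (entropy-regularized) form of the KL-controlled backup are all assumptions made for illustration.

```python
# Sketch of a KL-controlled, dropout-lower-bounded Q-learning loss (assumed interfaces).
import torch
import torch.nn.functional as F

def kl_control_q_loss(q_net, target_net, prior_logp, batch,
                      alpha=0.1, gamma=0.99, n_mc=8):
    s, a, r, s_next, done = batch          # tensors drawn from the fixed batch

    target_net.train()                     # keep dropout active for MC uncertainty
    with torch.no_grad():
        # Dropout-based lower bound on target Q-values: mean minus one std
        # over n_mc stochastic forward passes, instead of Double Q-learning.
        samples = torch.stack([target_net(s_next) for _ in range(n_mc)])  # (n_mc, B, A)
        q_lb = samples.mean(0) - samples.std(0)

        # Soft backup penalized by divergence from the prior p(a|s'):
        # value minus alpha * KL(pi || prior), expanded per action.
        pi_next = F.softmax(q_lb / alpha, dim=-1)
        log_prior = prior_logp(s_next)     # assumed: (B, A) log-probs from the prior
        backup = (pi_next * (q_lb - alpha * (pi_next.log() - log_prior))).sum(dim=-1)
        y = r + gamma * (1.0 - done) * backup

    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_sa, y)
```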
[
"Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL).",
"An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation.",
"To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy.",
"This modular approach enables integration of state-of-the-art algorithms for variational inference or RL.",
"Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot.",
"In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.",
"One salient feature of human intelligence is the ability to perform well in a single attempt at a new task instance, by recognizing critical characteristics of the instance and immediately executing appropriate behavior based on experience in similar instances.",
"Artificial agents must do likewise in applications where success must be achieved in one attempt and failure is irreversible.",
"This problem setting, single episode transfer, imposes a challenging constraint in which an agent experiences-and is evaluated on-only one episode of a test instance.",
"As a motivating example, a key challenge in precision medicine is the uniqueness of each patient's response to therapeutics (Hodson, 2016; Bordbar et al., 2015; Whirl-Carrillo et al., 2012) .",
"Adaptive therapy is a promising approach that formulates a treatment strategy as a sequential decision-making problem (Zhang et al., 2017; West et al., 2018; Petersen et al., 2019) .",
"However, heterogeneity among instances may require explicitly accounting for factors that underlie individual patient dynamics.",
"For example, in the case of adaptive therapy for sepsis (Petersen et al., 2019) , predicting patient response prior to treatment is not possible.",
"However, differences in patient responses can be observed via blood measurements very early after the onset of treatment (Cockrell and An, 2018) .",
"As a first step to address single episode transfer in reinforcement learning (RL), we propose a general algorithm for near-optimal test-time performance in a family of environments where differences in dynamics can be ascertained early during an episode.",
"Our key idea is to train an inference model and a probe that together achieve rapid inference of latent variables-which account for variation in a family of similar dynamical systems-using a small fraction (e.g., 5%) of the test episode, then deploy a universal policy conditioned on the estimated parameters for near-optimal control on the new instance.",
"Our approach combines the advantages of robust transfer and adaptation-based transfer, as we learn a single universal policy that requires no further training during test, but which is adapted to the new environment by conditioning on an unsupervised estimation of new latent dynamics.",
"In contrast to methods that quickly adapt or train policies via gradients during test but assume access to multiple test rollouts and/or dense rewards (Finn et al., 2017; Killian et al., 2017; Rakelly et al., 2019) , we explicitly optimize for performance in one test episode without accessing the reward function at test time.",
"Hence our method applies to real-world settings in which rewards during test are highly delayed or even completely inaccessible-e.g., a reward that depends on physiological factors that are accessible only in simulation and not from real patients.",
"We also consider computation time a crucial factor for real-time application, whereas some existing approaches require considerable computation during test (Killian et al., 2017) .",
"Our algorithm builds on variational inference and RL as submodules, which ensures practical compatibility with existing RL workflows.",
"Our main contribution is a simple general algorithm for single episode transfer in families of environments with varying dynamics, via rapid inference of latent variables and immediate execution of a universal policy.",
"Our method attains significantly higher cumulative rewards, with orders of magnitude faster computation time during test, than the state-of-the-art model-based method (Killian et al., 2017) , on benchmark high-dimensional domains whose dynamics are discontinuous and continuous in latent parameters.",
"We also show superior performance over optimization-based meta-learning and favorable performance versus baselines for robust transfer.",
"2D navigation and Acrobot are solved upon attaining terminal reward of 1000 and 10, respectively.",
"SEPT outperforms all baselines in 2D navigation and takes significantly fewer number of steps to solve (Figures 2a and 2c) .",
"While a single instance of 2D navigation is easy for RL, handling multiple instances is highly non-trivial.",
"EPOpt-adv and Avg almost never solve the test instance-we set \"steps to solve\" to 50 for test episodes that were unsolved-because interpolating between instance-specific optimal policies in policy parameter space is not meaningful for any task instance.",
"MAML did not perform well despite having the advantage of being provided with rewards at test time, unlike SEPT.",
"The gradient adaptation step was likely ineffective because the rewards are sparse and delayed.",
"BNN requires significantly more steps than SEPT, and it uses four orders of magnitude longer computation time (Table 4) , due to training a policy from scratch during the test episode.",
"Training times of all algorithms except BNN are in the same order of magnitude (Table 3) .",
"In Acrobot and HIV, where dynamics are continuous in latent variables, interpolation within policy space can produce meaningful policies, so all baselines are feasible in principle.",
"SEPT is statistically significantly faster than BNN and Avg, is within error bars of MAML, while EPOpt-adv outperforms the rest by a small margin (Figures 2b and 2d ).",
"Figure 5 shows that SEPT is competitive in terms of percentage of solved instances.",
"As the true values of latent variables for Acrobot test instances were interpolated and extrapolated from the training values, this shows that SEPT is robust to out-oftraining dynamics.",
"BNN requires more steps due to simultaneously learning and executing control during the test episode.",
"On HIV, SEPT reaches significantly higher cumulative rewards than all methods.",
"Oracle is within the margin of error of Avg.",
"This may be due to insufficient examples of the high-dimensional ground truth hidden parameters.",
"Due to its long computational time, we run three seeds for BNN on HIV, shown in Figure 4b , and find it was unable to adapt within one test episode.",
"Comparing directly to reported results in DPT (Yao et al., 2018) , SEPT solves 2D Navigation at least 33% (>10 steps) faster, and the cumulative reward of SEPT (mean and standard error) are above DPT's mean cumulative reward in Acrobot (Table 2) .",
"Together, these results show that methods that explicitly distinguish different dynamics (e.g., SEPT and BNN) can significantly outperform methods that implicitly interpolate in policy parameter space (e.g., Avg and EPOpt-adv) in settings where z has large discontinuous effect on dynamics, such as 2D navigation.",
"When dynamics are continuous in latent variables (e.g., Acrobot and HIV), interpolation-based methods fare better than BNN, which faces the difficulty of learning a model of the entire family of dynamics.",
"SEPT worked the best in the first case and is robust to the second case because it explicitly distinguishes dynamics and does not require learning a full transition model.",
"Moreover, SEPT does not require rewards at test time allowing it be useful on a broader class of problems than optimization-based meta-learning approaches like MAML.",
"Appendix D contains training curves.",
"Figures 2f, 2g and 2j show that the probe phase is necessary to solve 2D navigation quickly, while giving similar performance in Acrobot and significant improvement in HIV.",
"SEPT significantly outperformed TotalVar in 2D navigation and HIV, while TotalVar gives slight improvement in Acrobot, showing that directly using VAE performance as the reward for probing in certain environments can be more effective than a reward that deliberately encourages perturbation of state dimensions.",
"The clear advantage of SEPT over MaxEnt in 2D navigation and HIV supports our hypothesis in Section 3.3 that the variational lowerbound, rather than its negation in the maximum entropy viewpoint, should be used as the probe reward, while performance was not significantly differentiated in Acrobot.",
"SEPT outperforms DynaSEPT on all problems where dynamics are stationary during each instance.",
"On the other hand, DynaSEPT is the better choice in a non-stationary variant of 2D navigation where the dynamics \"switch\" abruptly at t = 10 (Figure 4c) .",
"Figure 3 shows that SEPT is robust to varying the probe length T p and dim(z).",
"Even with certain suboptimal probe length and dim(z), it can outperform all baselines on 2D navigation in both steps-to-solve and final reward; it is within error bars of all baselines on Acrobot based on final cumulative reward; and final cumulative reward exceeds that of baselines in HIV.",
"Increasing T p means foregoing valuable steps of the control policy and increasing difficulty of trajectory reconstruction for the VAE in high dimensional state spaces; T p is a hyper-parameter that should be validated for each application.",
"Appendix D.5 shows the effect of β on latent variable encodings.",
"We propose a general algorithm for single episode transfer among MDPs with different stationary dynamics, which is a challenging goal with real-world significance that deserves increased effort from the transfer learning and RL community.",
"Our method, Single Episode Policy Transfer (SEPT), trains a probe policy and an inference model to discover a latent representation of dynamics using very few initial steps in a single test episode, such that a universal policy can execute optimal control without access to rewards at test time.",
"Strong performance versus baselines in domains involving both continuous and discontinuous dependence of dynamics on latent variables show the promise of SEPT for problems where different dynamics can be distinguished via a short probing phase.",
"The dedicated probing phase may be improved by other objectives, in addition to performance of the inference model, to mitigate the risk and opportunity cost of probing.",
"An open challenge is single episode transfer in domains where differences in dynamics of different instances are not detectable early during an episode, or where latent variables are fixed but dynamics are nonstationary.",
"Further research on dynamic probing and control, as sketched in DynaSEPT, is one path toward addressing this challenge.",
"Our work is one step along a broader avenue of research on general transfer learning in RL equipped with the realistic constraint of a single episode for adaptation and evaluation.",
"A DERIVATIONS Proposition 1.",
"Let p ϕ (τ ) denote the distribution of trajectories induced by π ϕ .",
"Then the gradient of the entropy H(p ϕ (τ )) is given by",
"Proof.",
"Assuming regularity, the gradient of the entropy is",
"For trajectory τ := (s 0 , a 0 , s 1 , . . . , s t ) generated by the probe policy π ϕ :",
"Since p(s 0 ) and p(s i+1 |s i , a i ) do not depend on ϕ, we get",
"Substituting this into the gradient of the entropy gives equation 3.",
"Restore trained decoder ψ, encoder φ, probe policy ϕ, and control policy θ 3:"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14999999105930328,
0.10526315122842789,
0.4923076927661896,
0.1621621549129486,
0.04347825422883034,
0.23999999463558197,
0.1355932205915451,
0.09756097197532654,
0.17391303181648254,
0.11764705181121826,
0.04255318641662598,
0.05128204822540283,
0.1249999925494194,
0.17391303181648254,
0.28070175647735596,
0.3055555522441864,
0.21875,
0.11594202369451523,
0.09999999403953552,
0.0833333283662796,
0.1463414579629898,
0.7169811129570007,
0.16129031777381897,
0.1538461446762085,
0.10526315122842789,
0.1395348757505417,
0.14999999105930328,
0.13793103396892548,
0.09302324801683426,
0.052631575614213943,
0.1818181723356247,
0.10256409645080566,
0.1666666567325592,
0.11764705181121826,
0.10810810327529907,
0.19607841968536377,
0.10256409645080566,
0,
0.0624999962747097,
0.052631575614213943,
0.15094339847564697,
0.09836065024137497,
0.1249999925494194,
0.2641509473323822,
0.12244897335767746,
0.08163265138864517,
0,
0.07999999821186066,
0.2222222238779068,
0.0923076868057251,
0,
0.12244897335767746,
0.04999999701976776,
0.14035087823867798,
0.2142857164144516,
0.1111111044883728,
0.2545454502105713,
0.27272728085517883,
0.31578946113586426,
0.21276594698429108,
0.23076923191547394,
0.1428571343421936,
0.307692289352417,
0,
0.05405404791235924,
0.0555555522441864,
0.06451612710952759,
0.08888888359069824,
0.09756097197532654,
0.05882352590560913,
0.10810810327529907
] | rJeQoCNYDS | true | [
"Single episode policy transfer in a family of environments with related dynamics, via optimized probing for rapid inference of latent variables and immediate execution of a universal policy."
] |
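The SEPT record above separates a short probing phase from the control phase within a single test episode. The sketch below illustrates that test-time protocol only; `env`, `probe_policy`, `encoder`, and `control_policy` are assumed interfaces, and the actual method additionally trains these components offline with the variational and RL objectives described in the paper.

```python
# Sketch of SEPT's single-episode test protocol (assumed interfaces, not the authors' code).
def single_episode_transfer(env, probe_policy, encoder, control_policy,
                            probe_len=5, max_steps=200):
    """Probe briefly, infer latent dynamics z, then run the universal policy."""
    s = env.reset()
    probe_trajectory, total_reward, done = [], 0.0, False

    # Probing phase: a few steps with the dedicated probe policy.
    for _ in range(probe_len):
        a = probe_policy(s)
        s_next, reward, done = env.step(a)
        probe_trajectory.append((s, a))
        total_reward += reward
        s = s_next
        if done:
            return total_reward

    # Rapid inference: e.g., the mean of q(z | probe_trajectory) from the encoder.
    z = encoder(probe_trajectory)

    # Control phase: the universal policy conditioned on the estimated z,
    # executed immediately and without access to rewards for adaptation.
    for _ in range(max_steps - probe_len):
        a = control_policy(s, z)
        s, reward, done = env.step(a)
        total_reward += reward
        if done:
            break
    return total_reward
```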
[
"Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz.,",
"(i) the knowledge-base associated with the domain,",
"(ii) the history of the conversation, which is a sequence of utterances and",
"(iii) the current utterance for which the response needs to be generated.",
"While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context.",
"Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented GCN for goal-oriented dialogues.",
"Our model exploits",
"(i) the entity relation graph in a knowledge-base and",
"(ii) the dependency graph associated with an utterance to compute richer representations for words and entities.",
"Further, we take cognizance of the fact that in certain situations, such as, when the conversation is in a code-mixed language, dependency parsers may not be available.",
"We show that in such situations we could use the global word co-occurrence graph and use it to enrich the representations of utterances.",
"We experiment with the modified DSTC2 dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics.",
"Goal-oriented dialogue systems which can assist humans in various day-to-day activities have widespread applications in several domains such as e-commerce, entertainment, healthcare, etc.",
"For example, such systems can help humans in scheduling medical appointments, reserving restaurants, booking tickets, etc..",
"From a modeling perspective, one clear advantage of dealing with domain specific goal-oriented dialogues is that the vocabulary is typically limited, the utterances largely follow a fixed set of templates and there is an associated domain knowledge which can be exploited.",
"More specifically, there is some structure associated with the utterances as well as the knowledge base.",
"More formally, the task here is to generate the next response given",
"(i) the previous utterances in the conversation history",
"(ii) the current user utterance (known as the query) and",
"(iii) the entities and relationships in the associated knowledge base.",
"Current state-of-the-art methods BID30 BID23 typically use variants of Recurrent Neural Network BID10 to encode the history and current utterance and an external memory network to store the entities in the knowledge base.",
"The encodings of the utterances and memory elements are then suitably combined using an attention network and fed to the decoder to generate the response, one word at a time.",
"However, these methods do not exploit the structure in the knowledge base as defined by entity-entity relations and the structure in the utterances as defined by a dependency parse.",
"Such structural information can be exploited to improve the performance of the system as demonstrated by recent works on syntax-aware neural machine translation BID13 BID2 BID4 , semantic role labeling and document dating BID35 which use GCNs BID8 BID9 BID19 to exploit sentence structure.In this work, we propose to use such graph structures for goal-oriented dialogues.",
"In particular, we compute the dependency parse tree for each utterance in the conversation and use a GCN to capture the interactions between words.",
"This allows us to capture interactions between distant words in the sentence as long as they are connected by a dependency relation.",
"We also use GCNs to encode the entities of the KB where the entities are treated as nodes and the relations as edges of the graph.",
"Once we have a richer structure aware representation for the utterances and the entities, we use a sequential attention mechanism to compute an aggregated context representation from the GCN node vectors of the query, history and entities.",
"Further, we note that in certain situations, such as, when the conversation is in a code-mixed language or a language for which parsers are not available then it may not be possible to construct a dependency parse for the utterances.",
"To overcome this, we construct a co-occurrence matrix from the entire corpus and use this matrix to impose a graph structure on the utterances.",
"More specifically, we add an edge between two words in a sentence if they co-occur frequently in the corpus.",
"Our experiments suggest that this simple strategy acts as a reasonable substitute for dependency parse trees.We perform experiments with the modified DSTC2 BID3 dataset which contains goal-oriented conversations for reserving restaurants.",
"We also use its recently released code-mixed versions BID1 which contain code-mixed conversations in four different languages, viz., Hindi, Bengali, Gujarati and Tamil.",
"We compare with recent state-of-the-art methods and show that on average the proposed model gives an improvement of 2.8 BLEU points and 2 ROUGE points.Our contributions can be summarized as follows:",
"(i) We use GCNs to incorporate structural information for encoding query, history and KB entities in goal-oriented dialogues",
"(ii) We use a sequential attention mechanism to obtain query aware and history aware context representations",
"(iii) We leverage co-occurrence frequencies and PPMI (positive-pointwise mutual information) values to construct contextual graphs for code-mixed utterances and",
"(iv) We show that the proposed model obtains state-of-the-art results on the modified DSTC2 dataset and its recently released code-mixed versions.",
"In this section we discuss the results of our experiments as summarized in tables 1,2, and 3.",
"We use BLEU BID28 and ROUGE BID21 metrics to evaluate the generation quality of responses.",
"We also report the per-response accuracy which computes the percentage of responses in which the generated response exactly matches the ground truth response.",
"In order to evaluate the model's capability of correctly injecting entities in the generated response, we report the entity F1 measure as defined in .Results",
"on En-DSTC2 : We compare our model with the previous works on the English version of modified DSTC2 in table 1. For",
"most of the retrieval based models, the BLEU or ROUGE scores are not available as they select a candidate from a list of candidates as opposed to generating it. Our model",
"outperforms all of the retrieval and generation based models. We obtain",
"a gain of 0.7 in the per-response accuracy compared to the previous retrieval based state-of-the-art model of BID30 , which is a very strong baseline for our generation based model. We call this",
"a strong baseline because the candidate selection task of this model is easier than the response generation task of our model. We also obtain",
"a gain of 2.8 BLEU points, 2 ROUGE points and 2.5 entity F1 points compared to current state-of-the-art generation based models.Results on code-mixed datasets and effect of using RNNs: The results of our experiments on the code-mixed datasets are reported in table 2. Our model outperforms",
"the baseline models on all the code-mixed languages. One common observation from",
"the results over all the languages (including En-DSTC2) is that RNN+GCN-SeA performs better than GCN-SeA. Similar observations were made",
"by for the task of semantic role labeling.Effect of using Hops: As we increased the number of hops of GCNs, we observed a decrease in the performance. One reason for such a drop in",
"performance could be that the average utterance length is very small (7.76 words). Thus, there isn't much scope",
"for capturing distant neighbourhood information and more hops can add noisy information. Please refer to Appendix B for",
"detailed results about the effect of varying the number of hops.Frequency vs PPMI graphs: We observed that PPMI based contextual graphs were slightly better than frequency based contextual graphs (See Appendix C). In particular, when using PPMI",
"as opposed to frequency based contextual graph, we observed a gain of 0.95 in per-response accuracy, 0.45 in BLEU, 0.64 in ROUGE and 1.22 in entity F1 score when averaged across all the code-mixed languages.Effect of using Random Graphs: GCN-SeA-Random and GCN-SeA-Structure take the token embeddings directly instead of passing them though an RNN. This ensures that the difference",
"in performance of the two models are not influenced by the RNN encodings. The results are shown in table 3",
"and we observe a drop in performance for GCN-Random across all the languages. This Table 3 : GCN-SeA with random",
"graphs and frequency co-occurrence graphs on all DSTC2 datasets.shows that any random graph does not contribute to the performance gain of GCN-SeA and the dependency and contextual structures do play an important role.Ablations : We experiment with replacing the sequential attention by the Bahdanau attention BID0 . We also experiment with various combinations",
"of RNNs and GCNs as encoders.The results are shown in",
"We showed that structure aware representations are useful in goal-oriented dialogue and we obtain state-of-the art performance on the modified DSTC2 dataset and its recently released code-mixed versions.",
"We used GCNs to infuse structural information of dependency graphs and contextual graphs to enrich the representations of the dialogue context and KB.",
"We also proposed a sequential attention mechanism for combining the representations of",
"(i) query (current utterance),",
"(ii) conversation history and",
"(ii) the KB.",
"Finally, we empirically showed that when dependency parsers are not available for certain languages such as code-mixed languages then we can use word co-occurrence frequencies and PPMI values to extract a contextual graph and use such a graph with GCNs for improved performance.",
"south part of town.",
"bot api call R cuisine south moderate api call R cuisine south moderate KB Triples: pizza hut cherry hinton R post code pizza hut cherry hinton post code pizza hut cherry hinton R cuisine italian pizza hut cherry hinton R location south pizza hut cherry hinton R phone pizza hut cherry hinton phone pizza hut cherry hinton R address pizza hut cherry hinton address pizza hut cherry hinton R price moderate pizza hut cherry hinton R rating 3 restaurant alimentum R post code restaurant alimentum post code restaurant alimentum R cuisine european restaurant alimentum R location south restaurant alimentum R phone restaurant alimentum phone restaurant alimentum R address restaurant alimentum address restaurant alimentum R price moderate restaurant alimentum R rating 10 user <SILENCE> <SILENCE> bot restaurant alimentum is a nice restaurant in the south of town serving modern european food."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.19999998807907104,
0.09090908616781235,
0.07407406717538834,
0.07407406717538834,
0,
0.2448979616165161,
0.10526315867900848,
0.07999999821186066,
0.1249999925494194,
0.04878048226237297,
0.05405404791235924,
0.12765957415103912,
0.10526315122842789,
0.0624999962747097,
0.11764705181121826,
0.06666666269302368,
0,
0,
0,
0,
0.04444443807005882,
0.0952380895614624,
0.05405404791235924,
0.08695651590824127,
0.10526315122842789,
0.05405404791235924,
0.05714285373687744,
0.17391303181648254,
0.08163265138864517,
0.05405404791235924,
0.05882352590560913,
0.21739129722118378,
0.05128204822540283,
0.1304347813129425,
0.1764705777168274,
0.25806450843811035,
0.11764705181121826,
0.1111111044883728,
0,
0.06451612710952759,
0.05882352590560913,
0,
0.1666666567325592,
0.1395348757505417,
0.14814814925193787,
0.2222222238779068,
0.1666666567325592,
0.1090909019112587,
0,
0,
0.0952380895614624,
0,
0.0624999962747097,
0.08510638028383255,
0.05714285373687744,
0,
0.1666666567325592,
0.13333332538604736,
0,
0.1395348757505417,
0.11764705181121826,
0.3571428656578064,
0,
0,
0,
0.11764705181121826,
0,
0.03703703358769417
] | Skz-3j05tm | true | [
"We propose a Graph Convolutional Network based encoder-decoder model with sequential attention for goal-oriented dialogue systems."
] |
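The record above encodes utterances with GCNs over two kinds of graphs: dependency parses when a parser exists, and co-occurrence/PPMI-based contextual graphs otherwise. The sketch below shows a plain symmetric-normalized GCN layer and a thresholded co-occurrence adjacency purely as an illustration; the paper's encoder (RNN+GCN combinations, sequential attention over query, history, and KB) is richer than this, and the threshold and count dictionary here are assumptions.

```python
# Sketch of one GCN layer over an utterance graph (illustration only).
import numpy as np

def gcn_layer(adj, node_feats, weight):
    """adj: (n, n) 0/1 adjacency; node_feats: (n, d_in); weight: (d_in, d_out)."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # symmetric normalization
    return np.maximum(a_norm @ node_feats @ weight, 0.0)   # ReLU activation

def cooccurrence_adjacency(tokens, cooc_counts, threshold=5):
    """Contextual graph: connect two words of an utterance if their corpus-level
    co-occurrence count (or PPMI score) exceeds a threshold."""
    n = len(tokens)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if cooc_counts.get((tokens[i], tokens[j]), 0) >= threshold:
                adj[i, j] = adj[j, i] = 1.0
    return adj
```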
[
"Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications.",
"We achieve this by",
"(i) first learning vector embeddings of single graph nodes and",
"(ii) then composing them to compactly represent node sequences.",
"Specifically, we propose SENSE-S (Semantically Enhanced Node Sequence Embedding - for Single nodes), a skip-gram based novel embedding mechanism, for single graph nodes that co-learns graph structure as well as their textual descriptions.",
"We demonstrate that SENSE-S vectors increase the accuracy of multi-label classification tasks by up to 50% and link-prediction tasks by up to 78% under a variety of scenarios using real datasets.",
"Based on SENSE-S, we next propose generic SENSE to compute composite vectors that represent a sequence of nodes, where preserving the node order is important.",
"We prove that this approach is efficient in embedding node sequences, and our experiments on real data confirm its high accuracy in node order decoding.",
"Accurately learning vector embeddings for a sequence of nodes in a graph is critical to many scenarios, e.g., a set of Web pages regarding one specific topic that are linked together.",
"Such a task is challenging as:",
"(i) the embeddings may have to capture graph structure along with any available textual descriptions of the nodes, and moreover,",
"(ii) nodes of interest may be associated with a specific order.",
"For instance,",
"(i) for a set of Wikipedia pages w.r.t. a topic, there exists a recommended reading sequence;",
"(ii) an application may consist of a set of services/functions, which must be executed in a particular order (workflow composability);",
"(iii) in source routing, the sender of a packet on the Internet specifies the path that the packet takes through the network or",
"(iv) the general representation of any path in a graph or a network, e.g., shortest path.",
"Node sequence embedding, thus, requires us to",
"(i) learn embeddings for each individual node of the graph and",
"(ii) compose them together to represent their sequences.",
"To learn the right representation of individual nodes and also their sequences, we need to understand how these nodes are correlated with each other both functionally and structurally.A lot of work has only gone into learning single node embeddings (i.e., where node sequence length is 1), as they are essential in feature representations for applications like multi-label classification or link prediction.",
"For instance, algorithms in BID22 , BID4 , BID28 and others try to extract features purely from the underlying graph structure; algorithms in BID12 , BID19 and others learn vector representations of documents sharing a common vocabulary set.",
"However, many applications would potentially benefit from representations that are able to capture both textual descriptions and the underlying graph structure simultaneously.",
"For example, (1) classification of nodes in a network not only depends on their inter-connections (i.e., graph structure), but also nodes' intrinsic properties (i.e., their textual descriptions); (2) for product recommendations, if the product is new, it may not have many edges since not many users have interacted with it; however, using the textual descriptions along with the graph structure allows for efficient bootstrapping of the recommendation service.",
"For general case of sequence lengths greater than 1, despite the importance in applications like workflow composability described above, there is generally a lack of efficient solutions.",
"Intuitively, we can concatenate or add all involved node vectors; however, such a mechanism either takes too much space or loses the sequence information; thus unable to represent node sequences properly.We aim to learn node sequence embeddings by first first addressing the single node embedding issue, as a special case of node sequence embedding, by considering both the textual descriptions and the graph structure.",
"We seek to answer two questions: How should we combine these two objectives?",
"What framework should we use for feature learning?",
"Works that jointly address these two questions either investigate them under different problem settings BID1 BID32 , under restricted learning models BID13 , ignore the word context within the document BID16 , do not co-learn text and graph patterns or only consider linear combinations of text and graph BID3 ; this is elaborated further in Section 2.",
"In contrast, we propose a generic neural-network-based model called SENSE-S (Semantically Enhanced Node Sequence Embeddings -for Single nodes) for computing vector representations of nodes with additional semantic information in a graph.",
"SENSE-S is built on the foundation of skip-gram models.",
"However, SENSE-S is significantly different from classic skipgram models in the following aspects:",
"(i) For each word φ in the textual description of node v in the given graph, neighboring words of φ within v's textual description and neighboring nodes of v within the graph are sampled at the same time.(ii",
") The text and graph inputs are both reflected in the output layer in the form of probabilities of co-occurrence (in graph or text). (iii",
") Moreover, this joint optimization problem offers an opportunity to leverage the synergy between the graph and text inputs to ensure faster convergence. We",
"evaluate the generated vectors on (i",
") Wikispeedia (2009) to show that our SENSE-S model improves multi-label classification accuracy by up to 50% and (",
"ii) Physics Citation dataset BID14 to show that SENSE-S improves link prediction accuracy by up to 78% over the state-of-the-art.Next, we propose SENSE for general feature representation of a sequence of nodes.",
"This problem is more challenging in that",
"(i) besides the original objectives in SENSE-S, we now face another representation goal, i.e., sequence representation while preserving the node order;",
"(ii) it is important to represent the sequence in a compact manner; and",
"(iii) more importantly, given a sequence vector, we need to be able to decipher which functional nodes are involved and in what order.",
"To this end, we develop efficient schemes to combine individual vectors into complex sequence vectors that address all of the above challenges.",
"The key technique we use here is vector cyclic shifting, and we prove that the different shifted vectors are orthogonal with high probability.",
"This sequence embedding method is also evaluated on the Wikispeedia and Physics Citation datasets, and the accuracy of decoding a node sequence is shown to be close to 100% when the vector dimension is large.",
"We presented SENSE that learns semantically enriched vector representations of graph node sequences.",
"To achieve this, we first developed SENSE-S that learns single node embeddings via a multi-task learning formulation that jointly learns the co-occurrence probabilities of nodes within a graph and words within a node-associated document.",
"We evaluated SENSE-S against state-ofthe-art approaches that leverage both graph and text inputs and showed that SENSE-S improves multi-label classification accuracy in Wikispeedia dataset by up to 50% and link prediction over Physics Citation network by up to 78%.",
"We then developed SENSE that is able to employ provable schemes for vector composition to represent node sequences using the same dimension as the individual node vectors from SENSE-S.",
"We demonstrated that the individual nodes within the sequence can be inferred with a high accuracy (close to 100%) from such composite SENSE vectors.A LEMMA FOR THE PROOF OF THEOREM 2 Proof.",
"Since both x and y are unit vectors, we have x · y = ||x||2||y||2 cos θ = cos θ, where θ is the angle between x and y.",
"Since x and y are not correlated and both x and y are uniformly distributed across the sphere surface, θ is also uniformly distributed, and thus E[x · y] = 0.As x · y is purely determined by the angle θ between x and y, without loss of generality, we select y ="
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0,
0.1904761791229248,
0,
0.19512194395065308,
0.10810810327529907,
0.1111111044883728,
0.1764705777168274,
0.1463414579629898,
0,
0.13333332538604736,
0,
0,
0,
0.06896550953388214,
0.07407406717538834,
0.2222222238779068,
0.1818181723356247,
0,
0.08571428060531616,
0.09302325546741486,
0.24242423474788666,
0.060606058686971664,
0.054054051637649536,
0.19354838132858276,
0,
0,
0.13114753365516663,
0.09756097197532654,
0,
0,
0.10526315122842789,
0.25,
0.1818181723356247,
0,
0.13793103396892548,
0.0952380895614624,
0.1111111044883728,
0.0624999962747097,
0.1666666567325592,
0.12121211737394333,
0.1249999925494194,
0.12121211737394333,
0.1538461446762085,
0.1666666567325592,
0.14999999105930328,
0.23255813121795654,
0.054054051637649536,
0.09302325546741486,
0.1249999925494194,
0.0833333283662796
] | B1x9siCcYQ | true | [
"Node sequence embedding mechanism that captures both graph and text properties."
] |
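The SENSE record above composes node vectors into a single sequence vector via cyclic shifting, relying on shifted random unit vectors being nearly orthogonal in high dimensions. The sketch below is one way to realize that idea (shift the i-th vector by i positions, sum, and decode by inner products); the exact shift amounts and normalization used by the authors are assumptions here.

```python
# Sketch of cyclic-shift sequence composition and decoding (illustration only).
import numpy as np

def encode_sequence(node_vectors):
    """node_vectors: list of (d,) unit vectors in sequence order."""
    return sum(np.roll(v, i) for i, v in enumerate(node_vectors))

def decode_position(seq_vector, candidate_vectors, position):
    """Return the index of the candidate most likely at `position` in the sequence."""
    scores = [np.dot(seq_vector, np.roll(v, position)) for v in candidate_vectors]
    return int(np.argmax(scores))

# Usage: with d = 512 and random unit vectors, decoding recovers the order reliably,
# because differently shifted random vectors are almost orthogonal.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(10, 512))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)
seq = [3, 7, 1]
s = encode_sequence([vocab[i] for i in seq])
recovered = [decode_position(s, vocab, pos) for pos in range(len(seq))]
assert recovered == seq
```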
[
"Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine distortion, addition of noise, etc with the mimic of human perception nature.",
"This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness.",
"We borrow the scale-space extreme value idea from SIFT, and propose EVPNet (extreme value preserving network) which contains three novel components to model the extreme values: (1) parametric differences of Gaussian (DoG) to extract extrema, (2) truncated ReLU to suppress non-stable extrema and (3) projected normalization layer (PNL) to mimic PCA-SIFT like feature normalization.",
"Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness on a set of adversarial attacks (FGSM,PGD,etc) even without adversarial training.",
"Convolutional neural networks (CNNs) evolve very fast ever since AlexNet (Krizhevsky & Hinton, 2012 ) makes a great breakthrough on ImageNet image classification challenge (Deng et al., 2009 ) in 2012.",
"Various network architectures have been proposed to further boost classification performance since then, including VGGNet (Simonyan & Zisserman, 2015) , GoogleNet , ResNet (He et al., 2016) , DenseNet (Huang et al., 2017) and SENet , etc.",
"Recently, people even introduce network architecture search to automatically learn better network architectures (Zoph & Le, 2017; Liu et al., 2018) .",
"However, state-of-the-art CNNs are challenged by their robustness, especially vulnerability to adversarial attacks based on small, human-imperceptible modifications of the input (Szegedy et al., 2014; Goodfellow et al., 2015) .",
"thoroughly study the robustness of 18 well-known ImageNet models using multiple metrics, and reveals that adversarial examples are widely existent.",
"Many methods are proposed to improve network robustness, which can be roughly categorized into three perspectives: (1) modifying input or intermediate features by transformation (Guo et al., 2018) , denoising Jia et al., 2019) , generative models (Samangouei et al., 2018; Song et al., 2018) ; (2) modifying training by changing loss functions (Wong & Kolter, 2018; Elsayed et al., 2018; , network distillation (Papernot et al., 2016) , or adversarial training (Goodfellow et al., 2015; Tramer et al., 2018 ) (3) designing robust network architectures Svoboda et al., 2019; Nayebi & Ganguli, 2017) and possible combinations of these basic categories.",
"For more details of current status, please refer to a recent survey (Akhtar & Mian, 2018) .",
"Although it is known that adversarial examples are widely existent , some fundamental questions are still far from being well studied like what causes it, and how the factor impacts the performance, etc.",
"One of the interesting findings in is that model architecture is a more critical factor to network robustness than model size (e.g. number of layers).",
"Some recent works start to explore much deeper nature.",
"For instance, both (Geirhos et al., 2019; Baker et al., 2018) show that CNNs are trained to be strongly biased towards textures so that CNNs do not distinguish objects contours from other local or even noise edges, thus perform poorly on shape dominating object instances.",
"On the contrary, there are no statistical difference for human behaviors on both texture rich objects and global shape dominating objects in psychophysical trials.",
"Ilyas et al. (2019) further analyze and show that deep convolutional features can be categorized into robust and non-robust features, while non-robust features may even account for good generalization.",
"However, non-robust features are not expected to have good model interpretability.",
"It is thus an interesting topic to disentangle robust and non-robust features with certain kinds of human priors in the network designing or training process.",
"In fact, human priors have been extensively used in handcraft designed robust visual features like SIFT (Lowe, 2004) .",
"SIFT detects scale-space (Lindeberg, 1994) extrema from input images, and selects stable extrema to build robust descriptors with refined location and orientation, which achieves great success for many matching and recognition based vision tasks before CNN being reborn in 2012 (Krizhevsky & Hinton, 2012) .",
"The scale-space extrema are efficiently implemented by using a difference-of-Gaussian (DoG) function to search over all scales and image locations, while the DoG operator is believed to biologically mimic the neural processing in the retina of the eye (Young, 1987) .",
"Unfortunately, there is (at least explicitly) no such scale-space extrema operations in all existing CNNs.",
"Our motivation is to study the possibility of leveraging good properties of SIFT to renovate CNN networks architectures towards better accuracy and robustness.",
"In this paper, we borrow the scale-space extrema idea from SIFT, and propose extreme value preserving networks (EVPNet) to separate robust features from non-robust ones, with three novel architecture components to model the extreme values: (1) parametric DoG (pDoG) to extract extreme values in scale-space for deep networks, (2) truncated ReLU (tReLU) to suppress noise or non-stable extrema and (3) projected normalization layer (PNL) to mimic PCA-SIFT (Ke et al., 2004) like feature normalization.",
"pDoG and tReLU are combined into one block named EVPConv, which could be used to replace all k × k (k > 1) conv-layers in existing CNNs.",
"We conduct comprehensive experiments and ablation studies to verify the effectiveness of each component and the proposed EVPNet.",
"Figure 1 illustrates a comparison of responses for standard convolution + ReLU and EVPConv in ResNet-50 trained on ImageNet, and shows that the proposed EVPConv produces less noises and more responses around object boundary than standard convolution + ReLU, which demonstrates the capability of EVPConv to separate robust features from non-robust ones.",
"Our major contribution are:",
"• To the best of our knowledge, we are the first to explicitly separate robust features from non-robust ones in deep neural networks from an architecture design perspective.",
"• We propose three novel network architecture components to model extreme values in deep networks, including parametric DoG, truncated ReLU, and projected normalization layer, and verify their effectiveness through comprehensive ablation studies.",
"• We propose extreme value preserving networks (EVPNets) to combine those three novel components, which are demonstrated to be not only more accurate, but also more robust to a set of adversarial attacks (FGSM, PGD, etc) even for clean model without adversarial training.",
"This paper mimics good properties of robust visual feature SIFT to renovate CNN architectures with some novel architecture components, and proposes the extreme value preserving networks (EVPNet).",
"Experiments demonstrate that EVPNets can achieve similar or better accuracy over conventional CNNs, while achieving much better robustness to a set of adversarial attacks (FGSM, PGD, etc) even for clean model without any other tricks like adversarial training.",
"top-1 accuracy to near zero, while the EVP-ResNet variants keep 6∼10% top-1 accuracy.",
"The gap in FGSM attacks is even larger.",
"This improvement is remarkable considering that it is by clean model without adversarial training.",
"For the MobileNet case, we also observe notable accuracy and robustness improvement.",
"Please refer to Table 4 for more details.",
"In summary, our solid results and attempts may inspire future new ways for robust network architecture design or even automatic search."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23188404738903046,
0.8947368264198303,
0.11940298229455948,
0.1666666567325592,
0,
0.11320754140615463,
0.1428571343421936,
0.08163265138864517,
0.1463414579629898,
0.12903225421905518,
0.10810810327529907,
0.07692307233810425,
0.13636362552642822,
0.06666666269302368,
0.0634920597076416,
0.045454539358615875,
0.1702127605676651,
0.1875,
0.21739129722118378,
0.25641024112701416,
0.16129031777381897,
0.10526315122842789,
0,
0.6190476417541504,
0.119047611951828,
0.08510638028383255,
0.1621621549129486,
0.158730149269104,
0,
0.1702127605676651,
0.07692307233810425,
0.09999999403953552,
0.5416666865348816,
0.21052631735801697,
0.1249999925494194,
0,
0.05882352590560913,
0.1818181723356247,
0.06896550953388214,
0.0952380895614624
] | H1gHb1rFwr | true | [
"This paper aims to leverage good properties of robust visual features like SIFT to renovate CNN architectures towards better accuracy and robustness."
] |
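The EVPNet record above names three components: parametric DoG (pDoG), truncated ReLU (tReLU), and a projected normalization layer. The sketch below gives one plausible reading of the first two, with learnable Gaussian scales for the pDoG and a fixed threshold that zeroes small (presumably non-stable) responses for the tReLU; the paper's exact parameterizations may differ, and the PNL is omitted here.

```python
# Sketch of pDoG and a guessed tReLU (assumed forms, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    # Separable 2D Gaussian built from a learnable scale parameter.
    coords = torch.arange(size, dtype=torch.float32, device=sigma.device) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)                      # (size, size)

class ParametricDoG(nn.Module):
    """Difference of two depthwise Gaussian blurs with learnable sigmas."""

    def __init__(self, channels, size=5):
        super().__init__()
        self.log_sigma1 = nn.Parameter(torch.zeros(1))
        self.log_sigma2 = nn.Parameter(torch.full((1,), 0.5))
        self.channels, self.size = channels, size

    def forward(self, x):
        k1 = gaussian_kernel(self.log_sigma1.exp(), self.size)
        k2 = gaussian_kernel(self.log_sigma2.exp(), self.size)
        k1 = k1.expand(self.channels, 1, self.size, self.size).contiguous()
        k2 = k2.expand(self.channels, 1, self.size, self.size).contiguous()
        pad = self.size // 2
        blur1 = F.conv2d(x, k1, padding=pad, groups=self.channels)   # depthwise blur
        blur2 = F.conv2d(x, k2, padding=pad, groups=self.channels)
        return blur1 - blur2        # band-pass response whose extrema mimic SIFT's DoG

class TruncatedReLU(nn.Module):
    """Guess at tReLU: zero out small, presumably noisy/non-stable responses."""

    def __init__(self, threshold=0.1):
        super().__init__()
        self.threshold = threshold

    def forward(self, x):
        y = F.relu(x)
        return torch.where(y > self.threshold, y, torch.zeros_like(y))
```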
[
"Compressed representations generalize better (Shamir et al., 2010), which may be crucial when learning from limited or noisy labeled data.",
"The Information Bottleneck (IB) method (Tishby et al. (2000)) provides an insightful and principled approach for balancing compression and prediction in representation learning.",
"The IB objective I(X; Z) − βI(Y ; Z) employs a Lagrange multiplier β to tune this trade-off.",
"However, there is little theoretical guidance for how to select β.",
"There is also a lack of theoretical understanding about the relationship between β, the dataset, model capacity, and learnability.",
"In this work, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective.",
"We show how this can be avoided, by identifying a sharp phase transition between the unlearnable and the learnable which arises as β varies.",
"This phase transition defines the concept of IB-Learnability.",
"We prove several sufficient conditions for IB-Learnability, providing theoretical guidance for selecting β.",
"We further show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples.",
"We give a practical algorithm to estimate the minimum β for a given dataset.",
"We test our theoretical results on synthetic datasets, MNIST, and CIFAR10 with noisy labels, and make the surprising observation that accuracy may be non-monotonic in β.",
"Compressed representations generalize better (Shamir et al., 2010) , which is likely to be particularly important when learning from limited or noisy labels, as otherwise we should expect our models to overfit to the noise.",
"Tishby et al. (2000) introduced the Information Bottleneck (IB) objective function which learns a representation Z of observed variables (X, Y ) that retains as little information about X as possible, but simultaneously captures as much information about Y as possible:min IB β (X, Y ; Z) = min I(X; Z) − βI(Y ; Z)I(X; Y ) = dx dy p(x,",
"y)log p(x,y) p(x)p(y",
") is",
"the mutual information. The hyperparameter",
"β controls the trade-off between compression and prediction, in the same spirit as Rate-Distortion Theory (Shannon, 1948) , but with a learned representation function P (Z|X) that automatically captures some part of the \"semantically meaningful\" information, where the semantics are determined by the observed relationship between X and Y .The IB framework has",
"been extended to and extensively studied in a variety of scenarios, including Gaussian variables BID6 ), meta-Gaussians (Rey & Roth (2012) ), continuous variables via variational methods BID3 ; BID5 BID8 ), deterministic scenarios (Strouse & Schwab (2017a) ; BID12 ), geometric clustering (Strouse & Schwab (2017b) ), and is used for learning invariant and disentangled representations in deep neural nets BID0 b) ). However, a core issue",
"remains: how should we select β? In the original work,",
"the authors recommend sweeping β > 1, which can be prohibitively expensive in practice, but also leaves open interesting theoretical questions around the relationship between β, P (Z|X), and the observed data, P (X, Y ). For example, under how",
"much label noise will IB at a given β still be able to learn a useful representation?This work begins to answer",
"some of those questions by characterizing the onset of learning. Specifically:• We show that",
"improperly chosen β may result in a failure to learn: the trivial solution P (Z|X) = P (Z) becomes the global minimum of the IB objective, even for β 1.• We introduce the concept of IB-Learnability, and show that when we vary β, the IB objective will undergo a phase transition from the inability to learn to the ability to learn.• Using the second-order variation",
", we derive sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β.• We show that IB-learnability is",
"determined by the largest confident, typical, and imbalanced subset of the training examples, reveal its relationship with the slope of the Pareto frontier at the origin on the information plane I(Y ; Z) vs. I(X; Z), and discuss its relation with model capacity.We use our main results to demonstrate on synthetic datasets, MNIST (LeCun et al., 1998) , and CIFAR10 BID13 ) under noisy labels that the theoretical prediction for IB-Learnability closely matches experiment. We present an algorithm for estimating",
"the onset of IB-Learnability, and demonstrate that it does a good job of estimating the theoretical predictions and the empirical results. Finally, we observe discontinuities in",
"the Pareto frontier of the information plane as β increases, and those dicontinuities correspond to accuracy decreasing as β increases.",
"In this paper, we have presented theoretical results for predicting the onset of learning, and have shown that it is determined by the largest confident, typical and imbalanced subset of the examples.",
"We gave a practical algorithm for predicting the transition, and showed that those predictions are accurate, even in cases of extreme label noise.",
"We have also observed a surprising non-monotonic relationship between β and accuracy, and shown its relationship to discontinuities in the Pareto frontier of the information plane.",
"We believe these results will provide theoretical and practical guidance for choosing β in the IB framework for balancing prediction and compression.",
"Our work also raises other questions, such as whether there are other phase transitions in learnability that might be identified.",
"We hope to address some of those questions in future work.Mélanie Rey and Volker Roth.",
"Meta-gaussian information bottleneck.",
"In Advances in Neural Information Processing Systems, pp. 1916 Systems, pp.",
"-1924 Systems, pp.",
", 2012 .Ohad",
"Shamir, Sivan Sabato, and Naftali Tishby. Learning",
"and generalization with the information bottleneck. The structure",
"of the Appendix is as follows. In Appendix A",
", we provide preliminaries for the first-order and secondorder variations on functionals. Then we prove",
"Theorem 1 in Appendix B. In Appendix C, we state and prove Sufficient Condition 1 for IB β -learnability. In Appendix D",
", we calculate the first and second variations of IB β [p(z|x)] at the trivial representation p(z|x) = p(z), which is used in proving the Sufficient Condition 2 IB β -learnability (Appendix F). After these",
"preparations, we prove the key result of this paper, Theorem 2, in Appendix G. Then two important corollaries 2.1, 2.2 are proved in Appendix H. We provide additional discussions and insights for Theorem 2 in Appendix I, and Algorithm 1 for estimation of an upper boundβ 0 ≥ β 0 in Appendix J. Finally in Appendix K, we provide details for the experiments.",
"Similarity to information measures.",
"The denominator of Eq. (2) is closely related to mutual information.",
"Using the inequality x − 1 ≥ log(x",
") for x > 0, it becomes: DISPLAYFORM0 whereĨ(Ω x ; Y ) is the mutual information \"density\" at Ω x ⊂ X . Of",
"course, this quantity is also D KL [p(y|Ω x )||p(y)",
"], so we know that the denominator of Eq. FORMULA2 is non-negative. Incidentally",
", E y∼p (y|Ωx) p (y|Ωx) p(y) − 1 is the",
"density of \"rational mutual information\" BID15 DISPLAYFORM1 Similarly, the numerator is related to the self-information of Ω x : DISPLAYFORM2 so we can estimate the phase transition as: DISPLAYFORM3 Since Eq. (22) uses upper bounds on both the numerator and the denominator, it does not give us a bound on β 0 .Multiple phase",
"transitions. Based on this",
"characterization of Ω x , we can hypothesize datasets with multiple learnability phase transitions. Specifically,",
"consider a region Ω x0 that is small but \"typical\", consists of all elements confidently predicted as y 0 by p(y|x), and where y 0 is the least common class. By construction",
", this Ω x0 will dominate the infimum in Eq. (2), resulting in a small value of β 0 . However, the remaining",
"X − Ω x0 effectively form a new dataset, X 1 . At exactly β 0 , we may",
"have that the current encoder, p 0 (z|x), has no mutual information with the remaining classes in X 1 ; i.e., I(Y 1 ; Z 0 ) = 0. In this case, Definition",
"1 applies to p 0 (z|x) with respect to I(X 1 ; Z 1 ). We might expect to see that",
", at β 0 , learning will plateau until we get to some β 1 > β 0 that defines the phase transition for X 1 . Clearly this process could",
"repeat many times, with each new dataset X i being distinctly more difficult to learn than X i−1 . The end of Appendix F gives",
"a more detailed analysis on multiple phase transitions.Estimating model capacity. The observation that a model",
"can't distinguish between cluster overlap in the data and its own lack of capacity gives an interesting way to use IB-Learnability to measure the capacity of a set of models relative to the task they are being used to solve. Learnability and the Information",
"Plane. Many of our results can be interpreted",
"in terms of the geometry of the Pareto frontier illustrated in FIG1 , which describes the trade-off between increasing I(Y ; Z) and decreasing I(X; Z). At any point on this frontier that minimizes",
"IB min β ≡ min I(X; Z) − βI(Y ; Z), the frontier will have slope β −1 if it is differentiable. If the frontier is also concave (has negative",
"second derivative), then this slope β −1 will take its maximum β −1 0 at the origin, which implies IB β -Learnability for β > β 0 , so that the threshold for IB β -Learnability is simply the inverse slope of the frontier at the origin. More generally, as long as the Pareto frontier",
"is differentiable, the threshold for IB β -learnability is the inverse of its maximum slope. Indeed, Theorem 2 gives lower bounds of the slope",
"of the Pareto frontier at the origin. This means that we lack IB β -learnability for β",
"< β 0 , which makes the origin the optimal point. If the frontier is convex, then we achieve optimality",
"at the upper right endpoint if β > β 1 , otherwise on the frontier at the location between the two endpoints where the frontier slope is β −1 ."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.21052631735801697,
0.060606054961681366,
0.07407406717538834,
0.23529411852359772,
0.1428571343421936,
0.3589743673801422,
0.3333333432674408,
0.0714285671710968,
0.1764705777168274,
0.13793103396892548,
0.09756097197532654,
0.04081632196903229,
0.15625,
0,
0.0952380895614624,
0.158730149269104,
0.08571428060531616,
0.07692307233810425,
0.11320754140615463,
0,
0.13793103396892548,
0.21212120354175568,
0.05405404791235924,
0.09638553857803345,
0.15789473056793213,
0.1818181723356247,
0.1860465109348297,
0.20512819290161133,
0.20512819290161133,
0.1666666567325592,
0.05714285373687744,
0.1249999925494194,
0,
0.07999999821186066,
0,
0,
0.08695651590824127,
0.1666666567325592,
0.1666666567325592,
0.19999998807907104,
0.11764705181121826,
0.12765957415103912,
0.13114753365516663,
0,
0.07407406717538834,
0.0833333283662796,
0.10810810327529907,
0,
0.13793103396892548,
0.07692307233810425,
0.16129031777381897,
0,
0.12903225421905518,
0.13333332538604736,
0.1111111044883728,
0,
0.043478257954120636,
0,
0.19512194395065308,
0.05128204822540283,
0.06666666269302368,
0.20000000298023224,
0.0833333283662796,
0.1818181723356247,
0.04878048226237297,
0.11538460850715637,
0.17142856121063232,
0.19354838132858276,
0.05882352590560913,
0.10810810327529907
] | SJePKo5HdV | true | [
"Theory predicts the phase transition between unlearnable and learnable values of beta for the Information Bottleneck objective"
] |
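The record above (paper SJePKo5HdV) revolves around the IB objective min I(X; Z) − βI(Y; Z). The sketch below only evaluates that objective for discrete variables given a joint p(x, y) and a stochastic encoder p(z|x); it illustrates the quantity being optimized, not the paper's learnability analysis or its Algorithm 1, and all function names and the example distributions are assumptions.

```python
import numpy as np

def mutual_information(p_ab):
    """I(A; B) in nats for a joint distribution given as a 2-D array."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    denom = (p_a * p_b)[mask]
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / denom)))

def ib_objective(p_xy, p_z_given_x, beta):
    """IB_beta = I(X; Z) - beta * I(Y; Z) for a stochastic encoder p(z|x)."""
    p_x = p_xy.sum(axis=1)                 # marginal p(x)
    p_xz = p_x[:, None] * p_z_given_x      # joint p(x, z)
    p_yz = p_xy.T @ p_z_given_x            # p(y, z) = sum_x p(x, y) p(z|x)
    return mutual_information(p_xz) - beta * mutual_information(p_yz)

# Tiny illustrative example: a nearly deterministic 2-class problem and a noisy encoder.
p_xy = np.array([[0.4, 0.1], [0.1, 0.4]])
enc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(ib_objective(p_xy, enc, beta=2.0))
```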
[
" We consider a new class of \\emph{data poisoning} attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. ",
"We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the meta-learning community. ",
"Unlike previous poisoning strategies, the meta-poisoning can poison networks that are trained from scratch using an initialization unknown to the attacker and transfer across hyperparameters.",
"Further we show that our attacks are more versatile: they can cause misclassification of the target image into an arbitrarily chosen class.",
"Our results show above 50% attack success rate when poisoning just 3-10% of the training dataset.",
"We have extended learning-to-learn techniques to adversarial poison example generation, or learning-to-craft.",
"We devised a novel fast algorithm by which to solve the bi-level optimization inherent to poisoning, where the inner training of the network on the perturbed dataset must be performed for every crafting step.",
"Our results, showing the first clean-label poisoning attack that works on networks trained from scratch, demonstrates the effectiveness of this method.",
"Further our attacks are versatile, they have functionality such as the third-party attack which are not possible with previous methods.",
"We hope that our work establishes a strong attack baseline for future work on clean-label data poisoning and also promote caution that these new methods of data poisoning are able to muster a strong attack on industrially-relevant architectures, that even transfers between training runs and hyperparameter choices."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1
] | [
0.17391303181648254,
0.04999999329447746,
0.09756097197532654,
0.1538461446762085,
0.060606054961681366,
0,
0.12765957415103912,
0.10810810327529907,
0.0555555522441864,
0.2222222238779068
] | SyliaANtwH | true | [
"Generate corrupted training images that are imperceptible yet change CNN behavior on a target during any new training."
] |
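The record above (paper SyliaANtwH) casts poison crafting as a bi-level problem in which the inner training on the perturbed dataset is unrolled so that gradients reach the perturbations. A minimal one-step-unrolled crafting step is sketched below under several assumptions: PyTorch >= 2.0 (for `torch.func.functional_call`), `delta` created with `requires_grad=True`, a signed-gradient outer update, and an epsilon budget of 8/255. The authors' actual algorithm may unroll more steps and use a different update rule.

```python
import torch
import torch.nn.functional as F

def crafting_step(model, x_poison, delta, y_poison, x_target, y_adv,
                  lr_inner=0.1, lr_craft=1e-2, eps=8 / 255):
    """One outer (crafting) step: unroll one inner SGD step on the poisoned
    batch, then update delta to push the target image toward class y_adv."""
    params = {n: p for n, p in model.named_parameters() if p.requires_grad}
    inner_loss = F.cross_entropy(model(x_poison + delta), y_poison)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    # Parameters the victim would have after one SGD step on the poisoned batch.
    fast_params = {name: p - lr_inner * g
                   for (name, p), g in zip(params.items(), grads)}
    logits = torch.func.functional_call(model, fast_params, (x_target,))
    outer_loss = F.cross_entropy(logits, y_adv)   # misclassify target as the chosen class
    grad_delta, = torch.autograd.grad(outer_loss, delta)
    with torch.no_grad():
        delta -= lr_craft * grad_delta.sign()     # signed-gradient update on the poison
        delta.clamp_(-eps, eps)                   # keep the perturbation small (assumed budget)
    return outer_loss.item()
```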
[
"We give a new algorithm for learning a two-layer neural network under a very general class of input distributions.",
"Assuming there is a ground-truth two-layer network \n",
"y = A \\sigma(Wx) + \\xi,\n",
"where A, W are weight matrices, \\xi represents noise, and the number of neurons in the hidden layer is no larger than the input or output, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network.",
"The only requirement on the input x is that it is symmetric, which still allows highly complicated and structured input. \n\n",
"Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions.",
"We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks.",
"Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions.",
"Deep neural networks have been extremely successful in many tasks related to images, videos and reinforcement learning.",
"However, the success of deep learning is still far from being understood in theory.",
"In particular, learning a neural network is a complicated non-convex optimization problem, which is hard in the worst-case.",
"The question of whether we can efficiently learn a neural network still remains generally open, even when the data is drawn from a neural network.",
"Despite a lot of recent effort, the class of neural networks that we know how to provably learn in polynomial time is still very limited, and many results require strong assumptions on the input distribution.In this paper we design a new algorithm that is capable of learning a two-layer 1 neural network for a general class of input distributions.",
"Following standard models for learning neural networks, we assume there is a ground truth neural network.",
"The input data (x,",
"y) is generated by first sampling the input x from an input distribution D, then computing y according to the ground truth network that is unknown to the learner.",
"The learning algorithm will try to find a neural network f such that f",
"(x) is as close to y as possible over the input distribution D. Learning a neural network is known to be a hard problem even in some simple settings (Goel et al., 2016; Brutzkus & Globerson, 2017) , so we need to make assumptions on the network structure or the input distribution D, or both.",
"Many works have worked with a simple input distribution (such as Gaussians) and try to learn more and more complex networks (Tian, 2017; Brutzkus & Globerson, 2017; Li & Yuan, 2017; Soltanolkotabi, 2017; Zhong et al., 2017) .",
"However, the input distributions in real life are distributions of very complicated objects such as texts, images or videos.",
"These inputs are highly structured, clearly not Gaussian and do not even have a simple generative model.We consider a type of two-layer neural network, where the output y is generated as y = Aσ(W",
"x) + ξ.Here x ∈ R d is the input, W ∈ R k×d and A ∈ R k×k are two weight matrices 2 .",
"The function σ is the standard ReLU activation function σ(x",
") = max{x, 0} applied entry-wise to the vector W x, and ξ is a noise vector that has E[ξ] = 0 and is independent of x. Although",
"the network only has two layers, learning similar networks is far from trivial: even when the input distribution is Gaussian, Ge et al. (2017b) and Safran & Shamir (2018) showed that standard optimization objective can have bad local optimal solutions. Ge et al.",
"(2017b) gave a new and more complicated objective function that does not have bad local minima.For the input distribution D, our only requirement is that D is symmetric. That is,",
"for any x ∈ R d , the probability of observing x ∼ D is the same as the probability of observing −x ∼ D. A symmetric distribution can still be very complicated and cannot be represented by a finite number of parameters. In practice",
", one can often think of the symmetry requirement as a \"factor-2\" approximation to an arbitrary input distribution: if we have arbitrary training samples, it is possible to augment the input data with their negations to make the input distribution symmetric, and it should take at most twice the effort in labeling both the original and augmented data. In many cases",
"(such as images) the augmented data can be interpreted (for images it will just be negated colors) so reasonable labels can be obtained.",
"Optimizing the parameters of a neural network is a difficult problem, especially since the objective function depends on the input distribution which is often unknown and can be very complicated.",
"In this paper, we design a new algorithm using method-of-moments and spectral techniques to avoid the Published as a conference paper at ICLR 2019 complicated non-convex optimization for neural networks.",
"Our algorithm can learn a network that is of similar complexity as the previous works, while allowing much more general input distributions.There are still many open problems.",
"The current result requires output to have the same (or higher) dimension than the hidden layer, and the hidden layer does not have a bias term.",
"Removing these constraints are are immediate directions for future work.",
"Besides the obvious ones of extending our results to more general distributions and more complicated networks, we are also interested in the relations to optimization landscape for neural networks.",
"In particular, our algorithm shows there is a way to find the global optimal network in polynomial time, does that imply anything about the optimization landscape of the standard objective functions for learning such a neural network, or does it imply there exists an alternative objective function that does not have any local minima?",
"We hope this work can lead to new insights for optimizing a neural network."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.625,
0.27272728085517883,
0,
0.12765957415103912,
0.11764705181121826,
0.06666666269302368,
0.20689654350280762,
0.42105263471603394,
0.1249999925494194,
0.06896550953388214,
0.25806450843811035,
0.1621621549129486,
0.29032257199287415,
0.3333333134651184,
0.10526315122842789,
0.20512820780277252,
0.3571428656578064,
0.16949151456356049,
0.1702127605676651,
0.060606054961681366,
0.1702127605676651,
0,
0,
0.05128204822540283,
0.15094339847564697,
0.1818181723356247,
0.1599999964237213,
0.15625,
0,
0.24390242993831635,
0.1818181723356247,
0.1860465109348297,
0.05405404791235924,
0.0833333283662796,
0.09756097197532654,
0.23333333432674408,
0.3448275923728943
] | H1xipsA5K7 | true | [
"We give an algorithm for learning a two-layer neural network with symmetric input distribution. "
] |
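The record above (paper H1xipsA5K7) assumes data generated as y = A σ(W x) + ξ with a symmetric input distribution, and notes that an arbitrary dataset can be symmetrized by adding negated inputs. A small NumPy sketch of both pieces follows; the dimensions, noise level, and sampling choices are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 5, 1000              # input dim, hidden width, sample count (illustrative)
W = rng.standard_normal((k, d))
A = rng.standard_normal((k, k))

def ground_truth(x, noise_std=0.01):
    """y = A * relu(W x) + xi, applied row-wise to a batch of inputs."""
    hidden = np.maximum(x @ W.T, 0.0)                    # ReLU(W x)
    xi = noise_std * rng.standard_normal((x.shape[0], k))
    return hidden @ A.T + xi

x = rng.standard_normal((n, d))                          # any input distribution here
x_sym = np.concatenate([x, -x], axis=0)                  # "factor-2" symmetrization by negation
y_sym = ground_truth(x_sym)
```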
[
"Teachers intentionally pick the most informative examples to show their students.",
"However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable.",
"We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies.",
"We evaluate interpretability by (1) measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans.",
"We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.",
"Human teachers give informative examples to help their students learn concepts faster and more accurately BID23 BID21 BID4 .",
"For example, suppose a teacher is trying to teach different types of animals to a student.",
"To teach what a \"dog\" is they would not show the student only images of dalmatians.",
"Instead, they would show different types of dogs, so the student generalizes the word \"dog\" to all types of dogs, rather than merely dalmatians.Teaching through examples can be seen as a form of communication between a teacher and a student.",
"Recent work on learning emergent communication protocols in deep-learning based agents has been successful at solving a variety of tasks BID7 BID24 BID18 BID5 BID16 .",
"Unfortunately, the protocols learned by the agents are usually uninterpretable to humans , and thus at the moment have limited potential for communication with humans.We hypothesize that one reason the emergent protocols are uninterpretable is because the agents are typically optimized jointly.",
"Consider how this would play out with a teacher network T that selects or generates examples to give to a student network S. If T and S are optimized jointly, then T and S essentially become an encoder and decoder that can learn any arbitrary encoding.",
"T could encode \"dog\" through a picture of a giraffe and encode \"siamese cat\" through a picture of a hippo.The examples chosen by T, although effective at teaching S, are unintuitive since S does not learn in the way we expect.",
"On the other hand, picking diverse dog images to communicate the concept of \"dog\" is an intuitive strategy because it is the effective way to teach given how we implicitly assume a student would interpret the examples.",
"Thus, we believe that S having an interpretable learning strategy is key to the emergence of an interpretable teaching strategy.This raises the question of whether there is an alternative to jointly optimizing T and S, in which S maintains an interpretable learning strategy, and leads T to learn an interpretable teaching strategy.",
"We would ideally like such an alternative to be domain-agnostic.",
"Drawing on inspiration from the cognitive science work on rational pedagogy (see Section 2.1), we propose a simple change:1.",
"Train S on random examples 2.",
"Train T to pick examples for this fixed S"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06666666269302368,
0.2926829159259796,
0.8888888955116272,
0.1702127605676651,
0.29999998211860657,
0.05405404791235924,
0.1818181723356247,
0.17142856121063232,
0.307692289352417,
0.045454539358615875,
0.11538460850715637,
0.290909081697464,
0.1111111044883728,
0.07843136787414551,
0.15094339847564697,
0.06896550953388214,
0.052631575614213943,
0,
0
] | H1wt9x-RW | true | [
"We show that training a student and teacher network iteratively, rather than jointly, can produce emergent, interpretable teaching strategies."
] |
[
"Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems.",
"SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. ",
"Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. ",
"Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning.",
"The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs.",
"In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem.",
"Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t.",
"We show that the learning dynamics of our regression model closely matches with that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models.",
"Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparametrized neural networks.",
"SVRG outperforms SGD after a few epochs in this regime.",
"However, SGD is shown to always outperform SVRG in the overparameterized regime.",
"Many large-scale machine learning problems, especially in deep learning, are formulated as minimizing the sum of loss functions on millions of training examples (Krizhevsky et al., 2012; Devlin et al., 2018) .",
"Computing exact gradient over the entire training set is intractable for these problems.",
"Instead of using full batch gradients, the variants of stochastic gradient descent (SGD) (Robbins & Monro, 1951; Zhang, 2004; Bottou, 2010; Sutskever et al., 2013; Duchi et al., 2011; Kingma & Ba, 2014) evaluate noisy gradient estimates from small mini-batches of randomly sampled training points at each iteration.",
"The mini-batch size is often independent of the training set size, which allows SGD to immediately adapt the model parameters before going through the entire training set.",
"Despite its simplicity, SGD works very well, even in the non-convex non-smooth deep learning problems (He et al., 2016; Vaswani et al., 2017) .",
"However, the optimization performance of the stochastic algorithm near local optima is significantly limited by the mini-batch sampling noise, controlled by the learning rate and the mini-batch size.",
"The sampling variance and the slow convergence of SGD have been studied extensively in the past (Chen et al., 2016; Li et al., 2017; Toulis & Airoldi, 2017) .",
"To ensure convergence, machine learning practitioners have to either increase the mini-batch size or decrease the learning rate toward the end of the training (Smith et al., 2017; Ge et al., 2019) .",
"The minimum loss achieved in real dataset MNIST (a logistic regression model).",
"Our theoretical prediction (a) matched with the training dynamics for real datasets, demonstrating tradeoffs between computational cost and convergence speed.",
"The curves in red are SVRG and curves in blue are SGD.",
"Different markers refer to different per-iteration computational cost, i.e., the number of backpropagation used per iteration on average.",
"their strong theoretical guarantees, SVRG-like algorithms have seen limited success in training deep learning models (Defazio & Bottou, 2018) .",
"Traditional results from stochastic optimization focus on the asymptotic analysis, but in practice, most of deep neural networks are only trained for hundreds of epochs due to the high computational cost.",
"To address the gap between the asymptotic benefit of SVRG and the practical computational budget of training deep learning models, we provide a non-asymptotic study on the SVRG algorithms under a noisy least squares regression model.",
"Although optimizing least squares regression is a basic problem, it has been shown to characterize the learning dynamics of many realistic deep learning models (Zhang et al., 2019; Lee et al., 2019) .",
"Recent works suggest that neural network learning behaves very differently in the underparameterized regime vs the overparameterized regime Vaswani et al., 2019) , characterized by whether the learnt model can achieve zero expected loss.",
"We account for both training regimes in the analysis by assuming a linear target function and noisy labels.",
"In the presence of label noise, the loss is lower bounded by the label variance.",
"In the absence of the noise, the linear predictor can fit each training example perfectly.",
"We summarize the main contributions as follows:",
"• We show the exact expected loss of SVRG and SGD along an optimization trajectory as a function of iterations and computational cost.",
"• Our non-asymptotic analysis provides an insightful comparison of SGD and SVRG by considering their computational cost and learning rate schedule.",
"We discuss the trade-offs between the total computational cost, i.e. the total number of back-propagations performed, and convergence performance.",
"• We consider two different training regimes with and without label noise.",
"Under noisy labels, the analysis suggests SGD only outperforms SVRG under a mild total computational cost.",
"However, SGD always exhibits a faster convergence compared to SVRG when there is no label noise.",
"• Numerical experiments validate our theoretical predictions on both MNIST and CIFAR-10 using various neural network architectures.",
"In particular, we found the comparison of the convergence speed of SGD to that of SVRG in underparameterized neural networks closely matches with our noisy least squares model prediction.",
"Whereas, the effect of overparameterization is captured by the regression model without label noise.",
"In this paper, we studied the convergence properties of SGD and SVRG in the underparameterized and overparameterized settings.",
"We provided a non-asymptotic analysis of both algorithms.",
"We then investigated the question about which algorithm to use under certain total computational cost.",
"We performed numerical simulations of dynamics equations for both methods, as well as extensive experiments on the standard machine learning datasets, MNIST and CIFAR-10.",
"Remarkably, we found in many cases phenomenon predicted by our theory matched with observations in practice.",
"Our experiments suggested there is a trade-off between the computational cost and the convergence speed for underparameterized neural networks.",
"SVRG outperformed SGD after the first few epochs in this regime.",
"In the case of overparameterized model, a setting that matches with modern day neural networks training, SGD strictly dominated SVRG by showing a faster convergence for all computational cost."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.1395348757505417,
0.1395348757505417,
0.14999999105930328,
0.05714285373687744,
0.1463414579629898,
0.1111111044883728,
0.21621620655059814,
0.1818181723356247,
0.3499999940395355,
0.13333332538604736,
0.1875,
0.12244897335767746,
0.060606054961681366,
0.0952380895614624,
0.1395348757505417,
0.1428571343421936,
0.1904761791229248,
0.260869562625885,
0.08510638028383255,
0.0624999962747097,
0.25,
0.20689654350280762,
0.19999998807907104,
0.051282044500112534,
0.16326530277729034,
0.1599999964237213,
0.07999999821186066,
0.07843136787414551,
0.2631579041481018,
0.1249999925494194,
0.1818181723356247,
0.07407406717538834,
0.24390242993831635,
0.25,
0.3243243098258972,
0.0624999962747097,
0.2222222238779068,
0.1111111044883728,
0.10810810327529907,
0.260869562625885,
0.12121211737394333,
0.3888888955116272,
0.2142857164144516,
0.17142856121063232,
0.1860465109348297,
0.05714285373687744,
0.2631579041481018,
0.19354838132858276,
0.25
] | HyleclHKvS | true | [
"Non-asymptotic analysis of SGD and SVRG, showing the strength of each algorithm in convergence speed and computational cost, in both under-parametrized and over-parametrized settings."
] |
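The record above (paper HyleclHKvS) compares SGD with SVRG on a noisy least-squares model. For reference, here is the standard SVRG loop specialized to least squares, i.e., a full-gradient snapshot each epoch followed by variance-reduced stochastic steps; the hyperparameters are placeholders and this is not the paper's dynamics analysis.

```python
import numpy as np

def svrg_least_squares(X, y, lr=0.1, n_epochs=10, m=None, seed=0):
    """SVRG on f(w) = 1/(2n) * ||X w - y||^2: each epoch takes a full-gradient
    snapshot, then performs m variance-reduced stochastic steps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or n
    w = np.zeros(d)
    for _ in range(n_epochs):
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n           # gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            g_i = X[i] * (X[i] @ w - y[i])               # stochastic gradient at w
            g_i_snap = X[i] * (X[i] @ w_snap - y[i])     # same sample at the snapshot
            w -= lr * (g_i - g_i_snap + full_grad)       # variance-reduced update
    return w
```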
[
"Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models.",
"Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance.",
"Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear.",
"In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training.",
"We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data.",
"Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality.",
"Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models.",
"We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark.",
"Traditional neural machine translation (NMT) systems (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) generate sequences in an autoregressive fashion; each target token is predicted step-by-step by conditioning on the previous generated tokens in a monotonic (e.g. left-to-right) order.",
"While such autoregressive translation (AT) models have proven successful, the sequential dependence of decisions precludes taking full advantage of parallelism afforded by modern hardware (e.g. GPUs) at inference time.",
"On the other hand, there is a recent trend of non-autoregressive translation (NAT) models (Gu et al., 2018; , trading the model's capacity for decoding efficiency by making it possible predict the whole sequence or multi-token chunks of the sequence simultaneously.",
"Such a non-autoregressive factorization assumes that the output tokens are independent from each other.",
"However, this assumption obviously does not hold in reality and as a result NAT models generally perform worse than standard AT models.",
"One key ingredient to reducing the performance degradation of NAT models that is used in almost all existing works (Gu et al. (2018) ; ; Stern et al. (2019) , inter alia) is creation of training data through knowledge distillation (Hinton et al., 2015) .",
"More precisely, sequence-level knowledge distillation (Kim & Rush, 2016 ) -a special variant of the original approach -is applied during NAT model training by replacing the target side of training samples with the outputs from a pre-trained AT model trained on the same corpus with a roughly equal number of parameters.",
"It is usually assumed (Gu et al., 2018 ) that knowledge distillation's reduction of the \"modes\" (alternative translations for an input) in the training data is the key reason why distillation benefits NAT training.",
"However, this intuition has not been rigorously tested, leading to three important open questions:",
"• Exactly how does distillation reduce the \"modes\", and how we could we measure this reduction quantitatively?",
"Why does this reduction consistently improve NAT models?",
"• What is the relationship between the NAT model (student) and the AT model (teacher)?",
"Are different varieties of distilled data better for different NAT models?",
"• Due to distillation, the performance of NAT models is largely bounded by the choice of AT teacher.",
"Is there a way to further close the performance gap with standard AT models?",
"In this paper, we aim to answer the three questions above, improving understanding of knowledge distillation through empirical analysis over a variety of AT and NAT models.",
"Specifically, our contributions are as follows:",
"• We first visualize explicitly on a synthetic dataset how modes are reduced by distillation ( §3.1).",
"Inspired by the synthetic experiments, we further propose metrics for measuring complexity and faithfulness for a given training set.",
"Specifically, our metrics are the conditional entropy and KL-divergence of word translation based on an external alignment tool, and we show these are correlated with NAT model performance ( §3.2).",
"• We conduct a systematic analysis ( §4) over four AT teacher models and six NAT student models with various architectures on the standard WMT14 English-German translation benchmark.",
"These experiments find a strong correlation between the capacity of an NAT model and the optimal dataset complexity for the best translation quality.",
"• Inspired by these observations, we propose approaches to further adjust the complexity of the distilled data in order to match the model's capacity ( §5).",
"We also show that we can achieve the state-of-the-art performance for NAT and largely match the performance of the AT model.",
"In this paper, we first systematically examine why knowledge distillation improves the performance of NAT models.",
"We conducted extensive experiments with autoregressive teacher models of different capacity and a wide range of NAT models.",
"Furthermore, we defined metrics that can quantitatively measure the complexity of a parallel data set.",
"Empirically, we find that a higher-capacity NAT model requires a more complex distilled data to achieve better performance.",
"Accordingly, we propose several techniques that can adjust the complexity of a data set to match the capacity of an NAT model for better performance.",
"A EXPERIMENTAL DETAILS"
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.15686273574829102,
0.26923075318336487,
0.25925925374031067,
0.3478260934352875,
0.40816324949264526,
0.47058823704719543,
0.31372547149658203,
0.1702127605676651,
0.11594202369451523,
0.10344827175140381,
0.2153846174478531,
0.09302324801683426,
0.07999999821186066,
0.27272728085517883,
0.19718308746814728,
0.3333333134651184,
0.04651162400841713,
0.13636362552642822,
0.10810810327529907,
0.24390242993831635,
0.20512820780277252,
0.2222222238779068,
0.1395348757505417,
0.2545454502105713,
0,
0.08510638028383255,
0.21276594698429108,
0.24137930572032928,
0.178571417927742,
0.35999998450279236,
0.3461538553237915,
0.2978723347187042,
0.35555556416511536,
0.2222222238779068,
0.13636362552642822,
0.21739129722118378,
0.38461539149284363,
0
] | BygFVAEKDH | true | [
"We systematically examine why knowledge distillation is crucial to the training of non-autoregressive translation (NAT) models, and propose methods to further improve the distilled data to best match the capacity of an NAT model."
] |
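The record above (paper BygFVAEKDH) measures dataset complexity via the conditional entropy of word translation computed from an external alignment tool. The sketch below estimates H(target word | source word) from a list of aligned word pairs; the exact metric in the paper may be normalized or aggregated differently, so treat this as an approximation of the idea, with illustrative example data.

```python
import math
from collections import Counter, defaultdict

def alignment_conditional_entropy(aligned_pairs):
    """H(target word | source word), in nats, estimated from (src, tgt) aligned pairs.
    Lower values indicate a simpler, less multi-modal parallel corpus."""
    joint = Counter(aligned_pairs)
    src_totals = defaultdict(int)
    for (s, _), c in joint.items():
        src_totals[s] += c
    total = sum(joint.values())
    h = 0.0
    for (s, _), c in joint.items():
        p_st = c / total                   # p(s, t)
        p_t_given_s = c / src_totals[s]    # p(t | s)
        h -= p_st * math.log(p_t_given_s)
    return h

# Distilled data tends to map each source word more deterministically, lowering this value.
print(alignment_conditional_entropy([("Haus", "house"), ("Haus", "house"), ("Haus", "home")]))
```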
[
"Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art results in domains such as language modeling and machine translation.",
"Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning (RL) domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting.",
"In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives.",
"We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant.",
"The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture.",
"We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical.",
"GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. ",
"It has been argued that self-attention architectures (Vaswani et al., 2017) deal better with longer temporal horizons than recurrent neural networks (RNNs): by construction, they avoid compressing the whole past into a fixed-size hidden state and they do not suffer from vanishing or exploding gradients in the same way as RNNs.",
"Recent work has empirically validated these claims, demonstrating that self-attention architectures can provide significant gains in performance over the more traditional recurrent architectures such as the LSTM Radford et al., 2019; Devlin et al., 2019; .",
"In particular, the Transformer architecture (Vaswani et al., 2017) has had breakthrough success in a wide variety of domains: language modeling Radford et al., 2019; , machine translation (Vaswani et al., 2017; Edunov et al., 2018) , summarization (Liu & Lapata) , question answering (Dehghani et al., 2018; , multi-task representation learning for NLP (Devlin et al., 2019; Radford et al., 2019; , and algorithmic tasks (Dehghani et al., 2018) .",
"The repeated success of the transformer architecture in domains where sequential information processing is critical to performance makes it an ideal candidate for partially observable RL problems, where episodes can extend to thousands of steps and the critical observations for any decision often span the entire episode.",
"Yet, the RL literature is dominated by the use of LSTMs as the main mechanism for providing memory to the agent Kapturowski et al., 2019; Mnih et al., 2016) .",
"Despite progress at designing more expressive memory architectures Wayne et al., 2018; that perform better than LSTMs in memory-based tasks and partially-observable environments, they have not seen widespread adoption in RL agents perhaps due to their complex implementation, with the LSTM being seen as the go-to solution for environments where memory is required.",
"In contrast to these other memory architectures, the transformer is well-tested in many challenging domains and has seen several open-source implementations in a variety of deep learning frameworks 1 .",
"Motivated by the transformer's superior performance over LSTMs and the widespread availability of implementations, in this work we investigate the transformer architecture in the RL setting.",
"In particular, we find that the canonical transformer is significantly difficult to optimize, often resulting in performance comparable to a random policy.",
"This difficulty in training transformers exists in the supervised case as well.",
"Typically a complex learning rate schedule is required (e.g., linear warmup or cosine decay) in order to train (Vaswani et al., 2017; , or specialized weight initialization schemes are used to improve performance (Radford et al., 2019) .",
"These measures do not seem to be sufficient for RL.",
"In Mishra et al. (2018) , for example, transformers could not solve even simple bandit tasks and tabular Markov Decision Processes (MDPs), leading the authors to hypothesize that the transformer architecture was not suitable for processing sequential information.",
"However in this work we succeed in stabilizing training with a reordering of the layer normalization coupled with the addition of a new gating mechanism to key points in the submodules of the transformer.",
"Our novel gated architecture, the Gated Transformer-XL (GTrXL) (shown in Figure 1 , Right), is able to learn much faster and more reliably and exhibit significantly better final performance than the canonical transformer.",
"We further demonstrate that the GTrXL achieves state-ofthe-art results when compared to the external memory architecture MERLIN (Wayne et al., 2018) on the multitask DMLab-30 suite (Beattie et al., 2016) .",
"Additionally, we surpass LSTMs significantly on memory-based DMLab-30 levels while matching performance on the reactive set, as well as significantly outperforming LSTMs on memory-based continuous control and navigation environments.",
"We perform extensive ablations on the GTrXL in challenging environments with both continuous actions and high-dimensional observations, testing the final performance of the various components as well as the GTrXL's robustness to seed and hyperparameter sensitivity compared to LSTMs and the canonical transformer.",
"We demonstrate a consistent superior performance while matching the stability of LSTMs, providing evidence that the GTrXL architecture can function as a drop-in replacement to the LSTM networks ubiquitously used in RL.",
"In this paper we provided evidence that confirms previous observations in the literature that standard transformer models are unstable to train in the RL setting and often fail to learn completely (Mishra et al., 2018) .",
"We presented a new architectural variant of the transformer model, the GTrXL, which has increased performance, more stable optimization, and greater robustness to initial seed and hyperparameters than the canonical architecture.",
"The key contributions of the GTrXL are reordered layer normalization modules and a gating layer instead of the standard residual connection.",
"We performed extensive ablation experiments testing the robustness, ease of optimization and final performance of the gating layer variations, as well as the effect of the reordered layer normalization.",
"These results empirically demonstrate that the GRU-type gating performs best across all metrics, exhibiting comparable robustness to hyperparameters and random seeds as an LSTM while still maintaining a performance improvement.",
"Furthermore, the GTrXL (GRU) learns faster, more stably and achieves a higher final performance (even when controlled for parameters) than the other gating variants on the challenging multitask DMLab-30 benchmark suite.",
"Having demonstrated substantial and consistent improvement in DMLab-30, Numpad and Memory Maze over the ubiquitous LSTM architectures currently in use, the GTrXL makes the case for wider adoption of transformers in RL.",
"A core benefit of the transformer architecture is its ability to scale to very large and deep models, and to effectively utilize this additional capacity in larger datasets.",
"In future work, we hope to test the limits of the GTrXL's ability to scale in the RL setting by providing it with a large and varied set of training environments."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.0952380895614624,
0.20338982343673706,
0.22641508281230927,
0.1463414579629898,
0.31372547149658203,
0.2181818187236786,
0.20408162474632263,
0.10810810327529907,
0.1090909019112587,
0.17142856121063232,
0.21875,
0.20408162474632263,
0.18918918073177338,
0.19230768084526062,
0.3478260934352875,
0.13333332538604736,
0.22857142984867096,
0.06896550953388214,
0.11764705181121826,
0.16949151456356049,
0.2448979616165161,
0.1090909019112587,
0.2745097875595093,
0.21276594698429108,
0.20338982343673706,
0.30188679695129395,
0.178571417927742,
0.19230768084526062,
0.1428571343421936,
0.1304347813129425,
0.2222222238779068,
0.18867923319339752,
0.3529411852359772,
0.20408162474632263,
0.31372547149658203
] | SyxKrySYPr | true | [
"We succeed in stabilizing transformers for training in the RL setting and demonstrate a large improvement over LSTMs on DMLab-30, matching an external memory architecture."
] |
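The record above (paper SyxKrySYPr) credits the GTrXL's stability to reordered layer normalization plus a GRU-type gating layer in place of the standard residual connection. Below is one plausible GRU-style gate between a sublayer input x and output y; the exact parameterization and the positive update-gate bias (which starts the block near the identity) are assumptions based on common practice rather than a verbatim reproduction of the paper's layer.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gate g(x, y) used in place of the residual sum x + y."""
    def __init__(self, dim, gate_bias=2.0):
        super().__init__()
        self.w_r = nn.Linear(dim, dim, bias=False)
        self.u_r = nn.Linear(dim, dim, bias=False)
        self.w_z = nn.Linear(dim, dim, bias=False)
        self.u_z = nn.Linear(dim, dim, bias=False)
        self.w_g = nn.Linear(dim, dim, bias=False)
        self.u_g = nn.Linear(dim, dim, bias=False)
        # A positive bias on the update gate keeps z small at init, so the block starts near identity.
        self.bias_z = nn.Parameter(torch.full((dim,), gate_bias))

    def forward(self, x, y):
        r = torch.sigmoid(self.w_r(y) + self.u_r(x))
        z = torch.sigmoid(self.w_z(y) + self.u_z(x) - self.bias_z)
        h = torch.tanh(self.w_g(y) + self.u_g(r * x))
        return (1.0 - z) * x + z * h
```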
[
"Knowledge distillation is an effective model compression technique in which a smaller model is trained to mimic a larger pretrained model.",
"However in order to make these compact models suitable for real world deployment, not only do\n",
"we need to reduce the performance gap but also we need to make them more robust to commonly occurring and adversarial perturbations.",
"Noise permeates every level of the nervous system, from the perception of sensory signals to the\n",
"generation of motor responses.",
"We therefore believe that noise could be a crucial element in improving neural networks training and addressing the apparently contradictory goals of improving both the generalization and robustness of the\n",
"model.",
"Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise at either the input level or the supervision signals.",
"Our results show that noise can improve both the generalization and robustness of the model.",
"”Fickle Teacher” which uses dropout in teacher model as a source of response variation leads to significant generalization improvement.",
"”Soft Randomization”, which matches the output distribution of\n",
"the student model on the image with Gaussian noise to the output of the teacher on original image, improves the adversarial robustness manifolds compared to the student model trained with Gaussian noise.",
"We further show the surprising effect of random label corruption on a model’s adversarial robustness.",
"The study highlights the benefits of adding constructive noise in the knowledge distillation framework and hopes to inspire further work in the area.",
"The design of Deep Neural Networks (DNNs) for efficient real world deployment involves careful consideration of following key elements: memory and computational requirements, performance, reliability and security.",
"DNNs are often deployed in resource constrained devices or in applications with strict latency requirements such as self driving cars which leads to a necessity for developing compact models that generalizes well.",
"Furthermore, since the environment in which the models are deployed are often constantly changing, it is important to consider their performance on both indistribution data as well as out-of-distribution data.",
"Thereby ensuring the reliability of the models under distribution shift.",
"Finally, the model needs to be robust to malicious attacks by adversaries (Kurakin et al., 2016) .",
"Many techniques have been proposed for achieving high performance in compressed model such as model quantization, model pruning, and knowledge distillation.",
"In our study, we focus on knowledge distillation as an interactive learning method which is more similar to human learning.",
"Knowledge Distillation involves training a smaller network (student) under the supervision of a larger pre-trained network (teacher).",
"In the original formulation, Hinton et al. (2015) proposed mimicking the softened softmax output of the teacher model which consistently improves the performance of the student model compared to the model trained without teacher assistance.",
"However, despite the promising performance gain, there is still a significant performance gap between the student and the teacher model.",
"Consequently an optimal method of capturing knowledge from the larger network and transferring it to a smaller model remains an open question.",
"While reducing this generalization gap is important, in order to truly make these models suitable for real world deployment, it is also pertinent to incorporate methods into the knowledge distillation framework that improve the robustness of the student model to both commonly occurring and malicious perturbations.",
"For our proposed methods, we derive inspiration from studies in neuroscience on how humans learn.",
"A human infant is born with billions of neurons and throughout the course of its life, the connections between these neurons are constantly changing.",
"This neuroplasticity is at the very core of learning (Draganski et al., 2004) .",
"Much of the learning for a child happens not in isolation but rather through collaboration.",
"A child learns by interacting with the environment and understanding it through their own experience as well as observations of others.",
"Two learning theories are central to our approach: cognitive bias and trial-to-trial response variation.",
"Human decision-making shows systematic simplifications and deviations from the tenets of rationality ('heuristics') that may lead to sub-optimal decisional outcomes ('cognitive biases') (Korteling et al., 2018) .",
"These biases are strengthened through repeatedly rewarding a particular response to the same stimuli.",
"Trial-to-trial response variation in the brain, i.e. variation in neural responses to the same stimuli, encodes valuable information about the stimuli (Scaglione et al., 2011) .",
"We hypothesize that introducing constructive noise in the student-teacher collaborative learning framework to mimic the trial-to-trial response variation in humans can act as a deterrent to cognitive bias which is manifested in the form of memorization and over-generalization in neural networks.",
"When viewed from this perspective, noise can be a crucial element in improving learning and addressing the apparent contradictory goals of achieving accurate and robust models.",
"In this work, we present a compelling case for the beneficial effects of introduction of noise in knowledge distillation.",
"We provide a comprehensive study on the effects of noise on model generalization and robustness.",
"Our contributions are as follows:",
"• A comprehensive analysis on the effects of adding a diverse range of noise types in different aspects of the teacher-student collaborative learning framework.",
"Our study aims to motivate further work in exploring how noise can improve both generalization and robustness of the student model.",
"• A novel approach for transferring teacher model's uncertainty to a student using Dropout in teacher model as a source of trial-to-trial response variability which leads to significant generalization improvement.",
"We call this method \"Fickle Teacher\".",
"• A novel approach for using Gaussian noise in the knowledge distillation which improves the adversarial robustness of the student model by an order of magnitude while significantly limiting the drop in generalization.",
"we refer to this method as \"Soft Randomization\".",
"• Random label corruption as a strong deterrent to cognitive bias and demonstrating its surprising ability to significantly improve adversarial robustness with minimal reduction in generalization.",
"Inspired by trial-to-trial variability in the brain, we introduce variability in the knowledge distillation framework through noise at either the input level or the supervision signals.",
"For this purpose, we proposed novel ways of introducing noise at multiple levels and studied their effect on both generalization and robustness.",
"Fickle teacher improves the both in-distribution and out of distribution generalization significantly while also slightly improving robustness to common and adversarial perturbations.",
"Soft randomization improves the adversarial robustness of the student model trained alone with Gaussian noise by a huge margin for lower noise intensities while also reducing the drop in generalization.",
"We also showed the surprising effect of random label corruption alone in increasing the adversarial robustness by an order of magnitude in addition to improving the generalization.",
"Our strong empirical results suggest that injecting noises which increase the trial-to-trial variability in the knowledge distillation framework is a promising direction towards training compact models with good generalization and robustness.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09090908616781235,
0.04651162400841713,
0.13333332538604736,
0.09999999403953552,
0,
0.26923075318336487,
0.6666666865348816,
0.3414634168148041,
0.08695651590824127,
0.05714285373687744,
0.1702127605676651,
0.1904761791229248,
0.2978723347187042,
0.038461532443761826,
0.06896550953388214,
0.15094339847564697,
0.0555555522441864,
0.09302324801683426,
0.17391303181648254,
0.17391303181648254,
0.0476190447807312,
0.037735845893621445,
0.09090908616781235,
0.1666666567325592,
0.2647058665752411,
0.1904761791229248,
0.0833333283662796,
0.04878048226237297,
0.1428571343421936,
0.21276594698429108,
0.09756097197532654,
0.14814814925193787,
0.09756097197532654,
0.07999999821186066,
0.25806450843811035,
0.23076923191547394,
0.2666666507720947,
0.2926829159259796,
0,
0.2083333283662796,
0.2916666567325592,
0.14814814925193787,
0,
0.290909081697464,
0.05714285373687744,
0.1538461446762085,
0.5416666865348816,
0.4166666567325592,
0.1666666567325592,
0.2222222238779068,
0.23999999463558197,
0.38596490025520325
] | HkeJjeBFDB | true | [
"Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise in the knowledge distillation framework and studied their effect on generalization and robustness."
] |
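The record above (paper HkeJjeBFDB) describes "soft randomization" as matching the student's output distribution on a Gaussian-noised input to the teacher's output on the clean input. A minimal loss implementing that description is sketched below; the temperature, the mixing weight, the added clean-label cross-entropy term, and the noise level are assumptions, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_randomization_loss(student, teacher, x, y, sigma=0.1, T=4.0, alpha=0.9):
    """Clean-label cross-entropy plus a softened KL term matching the student's
    distribution on x + Gaussian noise to the teacher's distribution on clean x."""
    x_noisy = x + sigma * torch.randn_like(x)
    s_logits = student(x_noisy)
    with torch.no_grad():
        t_logits = teacher(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student(x), y)
    return alpha * kd + (1.0 - alpha) * ce
```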
[
"We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples.",
"Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity.",
"We define a differentiable loss function equivalent to the expected normalized cuts.",
"Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable.",
"Our approach generalizes to unseen datasets across a wide variety of domains, including text, and image.",
"Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%.",
"Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize.",
"Clustering unlabeled data is an important problem from both a scientific and practical perspective.",
"As technology plays a larger role in daily life, the volume of available data has exploded.",
"However, labeling this data remains very costly and often requires domain expertise.",
"Therefore, unsupervised clustering methods are one of the few viable approaches to gain insight into the structure of these massive unlabeled datasets.",
"One of the most popular clustering methods is spectral clustering (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007) , which first embeds the similarity of each pair of data points in the Laplacian's eigenspace and then uses k-means to generate clusters from it.",
"Spectral clustering not only outperforms commonly used clustering methods, such as k-means (Von Luxburg, 2007) , but also allows us to directly minimize the pairwise distance between data points and solve for the optimal node embeddings analytically.",
"Moreover, it is shown that the eigenvector of the normalized Laplacian matrix can be used to find the approximate solution to the well known normalized cuts problem (Ng et al., 2002; Von Luxburg, 2007) .",
"In this work, we introduce CNC, a framework for Clustering by learning to optimize expected Normalized Cuts.",
"We show that by directly minimizing a continuous relaxation of the normalized cuts problem, CNC enables end-to-end learning approach that outperforms top-performing clustering approaches.",
"We demonstrate that our approach indeed can produce lower normalized cut values than the baseline methods such as SpectralNet, which consequently results in better clustering accuracy.",
"Let us motivate CNC through a simple example.",
"In Figure 1 , we want to cluster 6 images from CIFAR-10 dataset into two clusters.",
"The affinity graph for these data points is shown in Figure 1",
"(a) (details of constructing such graph is discussed in Section 4.2).",
"In this example, it is obvious that the optimal clustering is the result of cutting the edge connecting the two triangles.",
"Cutting this edge will result in the optimal value for the normalized cuts objective.",
"In CNC, we define a new differentiable loss function equivalent to the expected normalized cuts objective.",
"We train a deep learning model to minimize the proposed loss in an unsupervised manner without the need for any labeled datasets.",
"Our trained model directly returns the probabilities of belonging to each cluster (Figure 1(b",
") ). In",
"this example, the optimal normalized cuts is 0.286 (Equation 1), and as we can see, the CNC loss also converges to this value (Figure 1(c)",
"Optimal Normalized cuts #edge cuts = 1 per cluster volume = 2+2+3 = 7 1/7 + 1/7 = 0.286",
"Cluster 2 Cluster 2 Cluster 1",
"Figure 1: Motivational example:",
"(a) affinity graph of 6 images from CIFAR-10, the objective is to cluster these images into two clusters.",
"(b) CNC model is trained to minimize expected normalized cuts in an unsupervised manner without the need for any labeled data.",
"For each data point, our model directly outputs the probabilities of it belonging to each of the clusters.",
"(c) The CNC loss converges to the optimal normalized cuts value.",
"In Algorithm 1 we show how we can scale this approach through a batch processing technique to large datasets.",
"We compare the performance of CNC to several learning-based clustering approaches (SpectralNet , DEC (Xie et al., 2016) , DCN (Yang et al., 2017) , VaDE (Jiang et al., 2017) , DEPICT (Ghasedi Dizaji et al., 2017) , IMSAT (Hu et al., 2017) , and IIC (Ji et al., 2019) ) on four datasets: MNIST, Reuters, CIFAR10, and CIFAR100.",
"Our results show up to 10.9% improvement over the baselines.",
"Moreover, generalizing spectral embeddings to unseen data points, a task commonly referred to as out-of-sample-extension (OOSE), is a non-trivial task (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016) .",
"Our results confirm that CNC generalizes to unseen data.",
"Our generalization results are superior (by up to 21.9%) to SpectralNet , the recent top-performing clustering approach with the ability to generalize.",
"We propose CNC (Clustering by learning to optimize Normalized Cuts), a framework for learning to cluster unlabeled examples.",
"We define a differentiable loss function equivalent to the expected normalized cuts and use it to train CNC model that directly outputs final cluster assignments.",
"CNC achieves state-of-the-art results on popular unsupervised clustering benchmarks (MNIST, Reuters, CIFAR-10, and CIFAR-100 and outperforms the strongest baselines by up to 10.9%.",
"CNC also enables generation, yielding up to 21.9% improvement over SpectralNet , the previous best-performing generalizable clustering approach.",
"Table 4 : Generalization results: CNC is trained on VGG and validated on MNIST-conv.",
"During inference, the model is applied to unseen TensorFlow graphs: ResNet.",
"Inception-v3, and AlexNet.",
"The ground truth for AlexNet is Bal = 99%, Cut = 4.6%, for Inception-v3, is Bal = 99%, Cut = 3.7%, and for ResNet is Bal = 99% and Cut = 3.3%.",
"GraphSAGE-on generalizes better than the other models."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.800000011920929,
0.09090908616781235,
0.6666666865348816,
0.19607841968536377,
0.19999998807907104,
0.07843136787414551,
0.1428571343421936,
0.052631575614213943,
0.19999998807907104,
0,
0.13636362552642822,
0.1230769157409668,
0.10169491171836853,
0.18518517911434174,
0.2926829159259796,
0.3829787075519562,
0.19999998807907104,
0.0624999962747097,
0.09999999403953552,
0.1111111044883728,
0.1111111044883728,
0.09756097197532654,
0.2702702581882477,
0.550000011920929,
0.4000000059604645,
0.21052631735801697,
0,
0.2083333283662796,
0.10256409645080566,
0,
0,
0.19512194395065308,
0.35555556416511536,
0.1538461446762085,
0.2857142686843872,
0.1428571343421936,
0.1249999925494194,
0.11428570747375488,
0.07843136787414551,
0.060606054961681366,
0.13636362552642822,
0.3499999940395355,
0.5416666865348816,
0.08510638028383255,
0.1395348757505417,
0,
0.11428570747375488,
0,
0.04651162400841713,
0.06451612710952759
] | BklLVAEKvH | true | [
"We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. We define a differentiable loss function equivalent to the expected normalized cuts."
] |
[
"We introduce the largest (among publicly available) dataset for Cyrillic Handwritten Text Recognition and the first dataset for Cyrillic Text in the Wild Recognition, as well as suggest a method for recognizing Cyrillic Handwritten Text and Text in the Wild.",
"Based on this approach, we develop a system that can reduce the document processing time for one of the largest mathematical competitions in Ukraine by 12 days and the amount of used paper by 0.5 ton.",
"Text is one of the main ways to transfer the information and it can take many forms.",
"It can be handwritten or printed, in the form of business documents, notes, bills, historical documents, advertisements, logos etc.",
"Therefore, the method for its recognition should be flexible enough to work with different text styles and under the different external conditions.",
"Although for the English language the task of text recognition is well studied [1] , [2] , for Cyrillic languages such studies are almost missing, the main reason being the lack of extensive publicly available datasets.",
"To the best of our knowledge, the only public Cyrillic dataset consists only of individual letters [3] , while others [1] , [4] , [5] , are unavailable.",
"In our research, we will focus on developing a single model for Handwritten Text Recognition and Text Recognition in the Wild, as the extreme case of printed text."
] | [
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.3888888955116272,
0.13333332538604736,
0.0714285671710968,
0,
0.3125,
0.1904761791229248,
0.05882352590560913,
0.1621621549129486
] | Ske6GT9c8r | true | [
"We introduce several datasets for Cyrillic OCR and a method for its recognition"
] |
[
"Most deep learning for NLP represents each word with a single point or single-mode region in semantic space, while the existing multi-mode word embeddings cannot represent longer word sequences like phrases or sentences.",
"We introduce a phrase representation (also applicable to sentences) where each phrase has a distinct set of multi-mode codebook embeddings to capture different semantic facets of the phrase's meaning.",
"The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space.",
"We propose an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence (e.g., a phrase or a sentence) during test time.",
"We find that the per-phrase/sentence codebook embeddings not only provide a more interpretable semantic representation but also outperform strong baselines (by a large margin in some tasks) on benchmark datasets for unsupervised phrase similarity, sentence similarity, hypernym detection, and extractive summarization.",
"Many widely-applicable NLP models learn a representation from only co-occurrence statistics in the raw text without any supervision.",
"Examples include word embedding like word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) , sentence embeddings like skip-thoughts (Kiros et al., 2015) , and contextualized word embedding like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) .",
"Most of these models use a single embedding to represent one sentence or one phrase and can only provide symmetric similarity measurement when no annotation is available.",
"However, a word or phrase might have multiple senses, and a sentence can involve multiple topics, which are hard to analyze based on a single embedding without supervision.",
"To address the issue, word sense induction methods (Lau et al., 2012) and recent multi-mode word embeddings (Neelakantan et al., 2014; Athiwaratkun & Wilson, 2017; Singh et al., 2018) represent each target word as multiple points or regions in a distributional semantic space by (explicitly or implicitly) clustering all the words appearing beside the target word.",
"In Figure 1 , the multi-mode representation of real property is illustrated as an example.",
"Real property can be observed in legal documents where it usually means a real estate, while a real property can also mean a true characteristic in philosophic discussions.",
"The previous approaches discover those senses by clustering observed neighboring words (e.g., company and tax).",
"In contrast with topic modeling like LDA (Blei et al., 2003) , the approaches need to solve a distinct clustering problem for every target word while topic modeling finds a single set of clusters by clustering all the words in the corpus.",
"Extending these multi-mode representations to arbitrary sequences like phrases or sentences is difficult due to two efficiency challenges.",
"First, there are usually many more unique phrases and sentences in a corpus than there are words, while the number of parameters for clustering-based approaches is O(|V | × |K| × |E|), where |V | is number of unique sequences, |K| is number of modes/clusters, and |E| is the number of embedding dimensions.",
"Estimating and storing such a large number of parameters take time and space.",
"More important, many unique sequences imply much fewer co-occurring words to be clustered for each sequence, especially for long sequences Figure 1 : The target phrase real property is represented by four clustering centers.",
"The previous work discovers the four modes by finding clustering centers which well compress the embedding of observed co-occurring words.",
"Instead, our compositional model learns to predict the embeddings of cluster centers from the sequence of words in the target phrase so as to reconstruct the (unseen) co-occurring distribution well.",
"like sentences, so an effective model needs to overcome this sample efficient challenge (i.e., sparseness in the co-occurring statistics).",
"However, clustering approaches often have too many parameters to learn the compositional meaning of each sequence without overfitting.",
"Nevertheless, the sentences (or phrases) sharing multiple words tend to have similar cluster centers, so we should be able to compress many redundant parameters in these local clustering problems to circumvent the challenges.",
"In this work, we adopt a neural encoder and decoder to achieve the goal.",
"As shown in Figure 1 , instead of clustering co-occurring words beside a target sequence at test time as in previous approaches, we learn a mapping between the target sequence (i.e., phrases or sentences) and the corresponding cluster centers during training so that we can directly predict those cluster centers using a single forward pass of the neural network for an arbitrary unseen input sequences during testing.",
"To allow the neural network to generate the cluster centers in an arbitrary order, we use a nonnegative and sparse coefficient matrix to dynamically match the sequence of predicted cluster centers and the observed set of co-occurring word embeddings during training.",
"After the coefficient matrix is estimated for each input sequence, the gradients are back-propagated to cluster centers (i.e., codebook embeddings) and weights of decoder and encoder, which allows us to train the whole model jointly and end-to-end.",
"In experiments, we show that the proposed model captures the compositional meanings of words in unsupervised phrase similarity tasks much better than averaging their (contextualized) word embeddings, strong baselines that are widely used in practice.",
"In addition to similarity, our model can also measure asymmetric relations like hypernymy without any supervision.",
"Furthermore, the multimode representation is shown to outperform the single-mode alternatives in sentence representation, especially as demonstrated in our extractive summarization experiment.",
"In this work, we overcome the computational and sampling efficiency challenges of learning the multi-mode representation for long sequences like phrases or sentences.",
"We use a neural encoder to model the compositional meaning of the target sequence and use a neural decoder to predict a set of codebook embeddings as the representation of the sentences or phrases.",
"During training, we use a non-negative sparse coefficient matrix to dynamically match the predicted codebook embeddings to a set of observed co-occurring words and allow the neural decoder to predict the clustering centers with an arbitrary permutation.",
"We demonstrate that the proposed models can learn to predict interpretable clustering centers conditioned on an (unseen) sequence, and the representation outperforms widely-used baselines such as BERT, skip-thoughts and various approaches based on GloVe in several unsupervised benchmarks.",
"The experimental results also suggest that multi-facet embeddings perform the best when the input sequence (e.g., a sentence) involves many aspects, while multi-facet and single-facet embeddings perform similarly good when the input sequence (e.g., a phrase) usually involves only one aspect.",
"In the future, we would like to train a single model which could generate multi-facet embeddings for both phrases and sentences, and evaluate the method as a pre-trained embedding approach for supervised or semi-supervised settings.",
"Furthermore, we plan to apply this method to other unsupervised learning tasks that heavily rely on co-occurrence statistics such as graph embedding or recommendation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.15789473056793213,
0.0555555522441864,
0.1428571343421936,
0.19230769574642181,
0.06451612710952759,
0.09756097197532654,
0.10256409645080566,
0.15789473056793213,
0.10169491171836853,
0.0714285671710968,
0,
0.06666666269302368,
0.08163265138864517,
0.19999998807907104,
0.16326530277729034,
0.07999999821186066,
0.08888888359069824,
0,
0.10526315122842789,
0.11764705181121826,
0.12903225421905518,
0.1395348757505417,
0.14814814925193787,
0.14492753148078918,
0.17391304671764374,
0.12765957415103912,
0.04444444179534912,
0.06896550953388214,
0.060606054961681366,
0.22857142984867096,
0.3243243098258972,
0.17777776718139648,
0.25,
0.08888888359069824,
0.22727271914482117,
0.1111111044883728
] | HkebMlrFPS | true | [
"We propose an unsupervised way to learn multiple embeddings for sentences and phrases "
] |
[
"We present a meta-learning approach for adaptive text-to-speech (TTS) with few data.",
"During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker.",
"The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system.",
"Instead, the aim is to produce a network that requires few data at deployment time to rapidly adapt to new speakers.",
"We introduce and benchmark three strategies:\n",
"(i) learning the speaker embedding while keeping the WaveNet core fixed,\n",
"(ii) fine-tuning the entire architecture with stochastic gradient descent, and\n",
"(iii) predicting the speaker embedding with a trained neural network encoder.\n",
"The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers.",
"Training a large model with lots of data and subsequently deploying this model to carry out classification or regression is an important and common methodology in machine learning.",
"It has been particularly successful in speech recognition , machine translation BID1 and image recognition BID2 BID3 .",
"In this textto-speech (TTS) work, we are instead interested in few-shot meta-learning.",
"Here the objective of training with many data is not to learn a fixed-parameter classifier, but rather to learn a \"prior\" neural network.",
"This prior TTS network can be adapted rapidly, using few data, to produce TTS systems for new speakers at deployment time.",
"That is, the intention is not to learn a fixed final model, but rather to learn a model prior that harnesses few data at deployment time to learn new behaviours rapidly.",
"The output of training is not longer a fixed model, but rather a fast learner.Biology provides motivation for this line of research.",
"It may be argued that evolution is a slow adaptation process that has resulted in biological machines with the ability to adapt rapidly to new data during their lifetimes.",
"These machines are born with strong priors that facilitate rapid learning.We consider a meta-learning approach where the model has two types of parameters: task-dependent parameters and task-independent parameters.",
"During training, we learn all of these parameters but discard the task-dependent parameters for deployment.",
"The goal is to use few data to learn the task-dependent parameters for new tasks rapidly.Task-dependent parameters play a similar role to latent variables in classical probabilistic graphical models.",
"Intuitively, these variables introduce flexibility, thus making it easier to learn the taskindependent parameters.",
"For example, in classical HMMs, knowing the latent variables results in a simple learning problem of estimating the parameters of an exponential-family distribution.",
"In neural networks, this approach also facilitates learning when there is clear data diversity and categorization.",
"We show this for adaptive TTS BID4 BID5 .",
"In this setting, speakers correspond to tasks.",
"During training we have many speakers, and it is therefore helpful to have task-dependent parameters to capture speaker-specific voice styles.",
"At the same time, it is useful to have a large model with shared parameters to capture the generic process of mapping text to speech.",
"To this end, we employ the WaveNet model.WaveNet BID6 is an autoregressive generative model for audio waveforms that has yielded state-of-art performance in speech synthesis.",
"This model was later modified for real-time speech generation via probability density distillation into a feed-forward model BID7 .",
"A fundamental limitation of WaveNet is the need for hours of training data for each speaker.",
"In this paper we describe a new WaveNet training procedure that facilitates adaptation to new speakers, allowing the synthesis of new voices from no more than 10 minutes of data with high sample quality.We propose several extensions of WaveNet for sample-efficient adaptive TTS.",
"First, we present two non-parametric adaptation methods that involve fine-tuning either the speaker embeddings only or all the model parameters given few data from a new speaker.",
"Second, we present a parametric textindependent approach whereby an auxiliary network is trained to predict new speaker embeddings.The experiments will show that all the proposed approaches, when provided with just a few seconds or minutes of recording, can generate high-fidelity utterances that closely resemble the vocal tract characteristics of a demonstration speaker, particularly when the entire model is fine-tuned end-to-end.",
"When fine-tuning by first estimating the speaker embedding and subsequently fine-tuning the entire model, we achieve state-of-the-art results in terms of sample naturalness and voice similarity to target speakers.",
"These results are robust across speech datasets recorded under different conditions and, moreover, we demonstrate that the generated samples are capable of confusing the state-of-the-art text-independent speaker verification system BID8 .TTS",
"techniques require hours of high-quality recordings, collected in controlled environments, for each new voice style. Given",
"this high cost, reducing the length of the training dataset could be valuable. For example",
", it is likely to be very beneficial when attempting to restore the voices of patients who suffer from voice-impairing medical conditions. In these cases",
", long high quality recordings are scarce.",
"This paper studied three variants of meta-learning for sample efficient adaptive TTS.",
"The adaptation method that fine-tunes the entire model, with the speaker embedding vector first optimized, shows impressive performance even with only 10 seconds of audio from new speakers.",
"When adapted with a few minutes of data, our model matches the state-of-the-art performance in sample naturalness.",
"Moreover, it outperforms other recent works in matching the new speaker's voice.",
"We also demon- : ROC curve for real versus generated utterance detection.",
"The utterances were generated using models with 5 and 10 minutes of training data per speaker from LibriSpeech and VCTK respectively.",
"Lower curve indicate that the verification system is having a harder time distinguishing real from generated samples.strated that the generated samples achieved a similar level of voice similarity to real utterances from the same speaker, when measured by a text independent speaker verification model.Our paper considers the adaptation to new voices with clean, high-quality training data collected in a controlled environment.",
"The few-shot learning of voices with noisy data is beyond the scope of this paper and remains a challenging open research problem.A requirement for less training data to adapt the model, however, increases the potential for both beneficial and harmful applications of text-to-speech technologies such as the creation of synthesized media.",
"While the requirements for this particular model (including the high-quality training data collected in a controlled environment and equally high quality data from the speakers to which we adapt, as described in Section 5.1) present barriers to misuse, more research must be conducted to mitigate and detect instances of misuse of text-to-speech technologies in general."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.11764705181121826,
0.17142856121063232,
0.29411762952804565,
0,
0.07999999821186066,
0.1599999964237213,
0.2222222238779068,
0.26923075318336487,
0.19512194395065308,
0,
0,
0.22857142984867096,
0.11428570747375488,
0.24390242993831635,
0.0555555522441864,
0.2857142686843872,
0.1860465109348297,
0.06896550953388214,
0.1904761791229248,
0.13793103396892548,
0.11428570747375488,
0,
0,
0.09090908616781235,
0.12121211737394333,
0.2702702581882477,
0.1538461446762085,
0.1249999925494194,
0.06896550953388214,
0.18518517911434174,
0.19999998807907104,
0.1764705777168274,
0.19512194395065308,
0.09090908616781235,
0.19354838132858276,
0.06896550953388214,
0.10256409645080566,
0,
0.07407406717538834,
0.19512194395065308,
0.375,
0.2222222238779068,
0,
0.05714285373687744,
0.2153846174478531,
0.20689654350280762,
0.16129031777381897
] | rkzjUoAcFX | true | [
"Sample efficient algorithms to adapt a text-to-speech model to a new voice style with the state-of-the-art performance."
] |
[
"This paper introduces a probabilistic framework for k-shot image classification.",
"The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples.",
"The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer).",
"The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning.",
"We show that even a simple probabilistic model achieves state-of-the-art on a standard k-shot learning dataset by a large margin.",
"Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.",
"A child encountering images of helicopters for the first time is able to generalize to instances with radically different appearance from only a handful of labelled examples.",
"This remarkable feat is supported in part by a high-level feature-representation of images acquired from past experience.",
"However, it is likely that information about previously learned concepts, such as aeroplanes and vehicles, is also leveraged (e.g. that sets of features like tails and rotors or objects like pilots/drivers are likely to appear in new images).",
"The goal of this paper is to build machine systems for performing k-shot learning, which leverage both existing feature representations of the inputs and existing class information that have both been honed by learning from large amounts of labelled data.K-shot learning has enjoyed a recent resurgence in the academic community BID6 BID5 BID15 BID12 .",
"Current stateof-the-art methods use complex deep learning architectures and claim that learning good features for k-shot learning entails training for k-shot specifically via episodic training that simulates many k-shot tasks.",
"In contrast, this paper proposes a general framework based upon the combination of a deep feature extractor, trained on batch classification, and traditional probabilistic modelling.",
"It subsumes two existing approaches in this vein BID1 , and is motivated by similar ideas from multi-task learning BID0 .",
"The intuition is that deep learning will learn powerful feature representations, whereas probabilistic inference will transfer top-down conceptual information from old classes.",
"Representational learning is driven by the large number of training examples from the original classes making it amenable to standard deep learning.",
"In contrast, the transfer of conceptual information to the new classes relies on a relatively small number of existing classes and k-shot data points, which means probabilistic inference is appropriate.While generalisation accuracy is often the key objective when training a classifier, calibration is also a fundamental concern in many applications such as decision making for autonomous driving and medicine.",
"Here, calibration refers to the agreement between a classifier's uncertainty and the frequency of its mistakes, which has recently received increased attention.",
"For example, show that the calibration of deep architectures deteriorates as depth and complexity increase.",
"Calibration is closely related to catastrophic forgetting in continual learning.",
"However, to our knowledge, uncertainty has so far been over-looked by the k-shot community even though it is high in this setting.Our basic setup mimics that of the motivating example above: a standard deep convolutional neural network (CNN) is trained on a large labelled training set.",
"This learns a rich representation of images at the top hidden layer of the CNN.",
"Accumulated knowledge about classes is embodied in the top layer softmax weights of the network.",
"This information is extracted by training a probabilistic model on these weights.",
"K-shot learning can then",
"1) use the representation of images provided by the CNN as input to a new softmax function,",
"2) learn the new softmax weights by combining prior information about their likely form derived from the original dataset with the k-shot likelihood.The main contributions of our paper are:",
"1) We propose a probabilistic framework for k-shot learning.",
"It combines deep convolutional features with a probabilistic model that treats the top-level weights of a neural network as data, which can be used to regularize the weights at k-shot time in a principled Bayesian fashion.",
"We show that the framework recovers L 2 -regularised logistic regression, with an automatically determined setting of the regularisation parameter, as a special case.2) We show that our approach achieves state-of-the-art results on the miniImageNet dataset by a wide margin of roughly 6% for 1-and 5-shot learning.",
"We further show that architectures with better batch classification accuracy also provide features which generalize better at k-shot time.",
"This finding is contrary to the current belief that episodic training is necessary for good performance and puts the success of recent complex deep learning approaches to k-shot learning into context.3) We show on miniImageNet and CIFAR-100 that our framework achieves a good trade-off between classification accuracy and calibration, and it strikes a good balance between learning new classes and forgetting the old ones.",
"We present a probabilistic framework for k-shot learning that exploits the powerful features and class information learned by a neural network on a large training dataset.",
"Probabilistic models are then used to transfer information in the network weights to new classes.",
"Experiments on miniImageNet using a simple Gaussian model within our framework achieve state-of-the-art for 1-shot and 5-shot learning by a wide margin, and at the same time return well calibrated predictions.",
"This finding is contrary to the current belief that episodic training is necessary to learn good k-shot features and puts the success of recent complex deep learning approaches to k-shot learning into context.",
"The new approach is flexible and extensible, being applicable to general discriminative models and kshot learning paradigms.",
"For example, preliminary results on online k-shot learning indicate that the probabilistic framework mitigates catastrophic forgetting by automatically balancing performance on the new and old classes.The Gaussian model is closely related to regularised logistic regression, but provides a principled and fully automatic way to regularise.",
"This is particularly important in k-shot learning, as it is a low-data regime, in which cross-validation performs poorly and where it is important to train on all available data, rather than using validation sets.Appendix to \"Discriminative k-shot learning using probabilistic models\"A DETAILS ON THE DERIVATION AND APPROXIMATIONS FROM SEC. 2.1As stated in the main text, the probabilistic k-shot learning approach comprises four phases mirroring the dataflow:Phase 1: Representational learning.",
"The large dataset D is used to train the CNN Φ ϕ using standard deep learning optimisation approaches.",
"This involves learning both the parameters ϕ of the feature extractor up to the last hidden layer, as well as the softmax weights W. The network parameters ϕ are fixed from this point on and shared across phases.",
"This is a standard setup for multitask learning and in the present case it ensures that the features derived from the representational learning can be leveraged for k-shot learning.Phase 2: Concept learning.",
"The softmax weights W are effectively used as data for concept learning by training a probabilistic model that detects structure in these weights which can be transferred for k-shot learning.",
"This approach will be justified in the next section.",
"For the moment, we consider a general class of probabilistic models in which the two sets of weights are generated from shared hyperparameters θ, so that p( W, W, θ) = p(θ)p( W|θ)p(W|θ) (see FIG0 ).Phases",
"3 and 4: k-shot learning and testing. Probabilistic",
"k-shot learning leverages the learned representation Φ ϕ from phase 1 and the probabilistic model p( W, W, θ) from phase 2 to build a (posterior) predictive model for unseen new examples using examples from the small dataset D."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8333333134651184,
0.11428570747375488,
0.04999999701976776,
0.21621620655059814,
0.375,
0.0555555522441864,
0.10256409645080566,
0.12903225421905518,
0.0416666641831398,
0.158730149269104,
0.1621621549129486,
0.21052631735801697,
0,
0.11428570747375488,
0,
0.1230769231915474,
0.05714285373687744,
0.06896550953388214,
0,
0.10344827175140381,
0.14814814925193787,
0,
0.23076923191547394,
0,
0.06666666269302368,
0.0952380895614624,
0.43478259444236755,
0.17391303181648254,
0.2545454502105713,
0.1875,
0.25,
0.31578946113586426,
0,
0.1860465109348297,
0.1463414579629898,
0,
0.2142857164144516,
0.11428570747375488,
0,
0.043478257954120636,
0.24390242993831635,
0.24390242993831635,
0.08695651590824127,
0.1249999925494194,
0.0952380895614624,
0.17777776718139648
] | r1DPFCyA- | true | [
"This paper introduces a probabilistic framework for k-shot image classification that achieves state-of-the-art results"
] |
[
"Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay.",
"We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy.",
"Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30.",
"It is the first agent to exceed human-level performance in 52 of the 57 Atari games.",
"Reinforcement Learning (RL) has seen a rejuvenation of research interest recently due to repeated successes in solving challenging problems such as reaching human-level play on Atari 2600 games BID15 , beating the world champion in the game of Go BID21 , and playing competitive 5-player DOTA BID18 .",
"The earliest of these successes leveraged experience replay for data efficiency and stacked a fixed number of consecutive frames to overcome the partial observability in Atari 2600 games.",
"However, with progress towards increasingly difficult, partially observable domains, the need for more advanced memory-based representations increases, necessitating more principled solutions such as recurrent neural networks (RNNs).",
"The use of LSTMs BID8 within RL has been widely adopted to overcome partial observability BID5 BID16 BID3 BID4 .In",
"this paper we investigate the training of RNNs with experience replay. We",
"have three primary contributions. First",
", we demonstrate the effect of experience replay on parameter lag, leading to representational drift and recurrent state staleness. This",
"is potentially exacerbated in the distributed training setting, and ultimately results in diminished training stability and performance. Second",
", we perform an empirical study into the effects of several approaches to RNN training with experience replay, mitigating the aforementioned effects. Third",
", we present an agent that integrates these findings to achieve significant advances in the state of the art on Atari-57 BID1 and matches the state of the art on DMLab-30 BID0 . To the",
"best of our knowledge, our agent, Recurrent Replay Distributed DQN (R2D2), is the first to achieve this using a single network architecture and fixed set of hyper-parameters.",
"Here we take a step back from evaluating performance and discuss our empirical findings in a broader context.",
"There are two surprising findings in our results.First, although zero state initialization was often used in previous works BID5 BID4 , we have found that it leads to misestimated action-values, especially in the early states of replayed sequences.",
"Moreover, without burn-in, updates through BPTT to these early time steps with poorly estimated outputs seem to give rise to destructive updates and hinder the network's ability to recover from sub-optimal initial recurrent states.",
"This suggests that either the context-dependent recurrent state should be stored along with the trajectory in replay, or an initial part of replayed sequences should be reserved for burn-in, to allow the RNN to rely on its recurrent state and exploit long-term temporal dependencies, and the two techniques can also be combined beneficially.",
"We have also observed that the underlying problems of representational drift and recurrent state staleness are potentially exacerbated in the distributed setting (see Appendix), highlighting the importance of robustness to these effects through an adequate training strategy of the RNN.Second, we found that the impact of RNN training goes beyond providing the agent with memory.",
"Instead, RNN training also serves a role not previously studied in RL, potentially by enabling better representation learning, and thereby improves performance even on domains that are fully observable and do not obviously require memory (cf. BREAKOUT results in the feed-forward ablation).Finally",
", taking a broader view on our empirical results, we note that scaling up of RL agents through parallelization and distributed training allows them to benefit from huge experience throughput and achieve ever-increasing results over broad simulated task suites such as Atari-57 and DMLab-30. Impressive",
"as these results are in terms of raw performance, they come at the price of high sample complexity, consuming billions of simulated time steps in hours or days of wall-clock time. One widely",
"open avenue for future work lies in improving the sample efficiency of these agents, to allow applications to domains that do not easily allow fast simulation at similar scales. Another remaining",
"challenge, very apparent in our results on Atari-57, is exploration: Save for the hardest-exploration games from Atari-57, R2D2 surpasses human-level performance on this task suite significantly, essentially 'solving' many of the games therein. Figure 6 : Left:",
"Parameter lag experienced with distributed prioritized replay with (top) 256 and (bottom) 64 actors on four DMLab levels: explore obstructed goals large (eogl), explore object rewards many (eorm), lasertag three opponents small (lots), rooms watermaze (rw). Center: initialstate",
"and Right: final-state Q-value discrepancy for the same set of experiments."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1904761791229248,
0.1428571343421936,
0.260869562625885,
0.1666666567325592,
0.1249999925494194,
0.2083333283662796,
0.12765957415103912,
0.09756097197532654,
0.1818181723356247,
0,
0.39024388790130615,
0.0555555522441864,
0.1428571343421936,
0.2978723347187042,
0.30434781312942505,
0.052631575614213943,
0.06896550953388214,
0.11764705181121826,
0.15625,
0.1515151411294937,
0.06557376682758331,
0.21875,
0.04081632196903229,
0.07999999821186066,
0.07407406717538834,
0.10526315122842789,
0.1875
] | r1lyTjAqYX | true | [
"Investigation on combining recurrent neural networks and experience replay leading to state-of-the-art agent on both Atari-57 and DMLab-30 using single set of hyper-parameters."
] |
[
"The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. \n",
"However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax.\n",
"In this work, we present linguistically-informed self-attention (LISA): a new neural network model that combines \n",
"multi-head self-attention with multi-task learning across dependency parsing, part-of-speech, predicate detection and SRL.",
"For example, syntax is incorporated by training one of the attention heads to attend to syntactic parents for each token.",
"Our model can predict all of the above tasks, but it is also trained such that if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model.\n",
"In experiments on the CoNLL-2005 SRL dataset LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates and more than 2.0 F1 on out-of-domain data.",
"On ConLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13% reduction in error.",
"Semantic role labeling (SRL) extracts a high-level representation of meaning from a sentence, labeling e.g. who did what to whom.",
"Explicit representations of such semantic information have been shown to improve results in challenging downstream tasks such as dialog systems BID63 BID14 , machine reading BID8 BID65 and machine translation BID36 BID5 .Though",
"syntax was long considered an obvious prerequisite for SRL systems BID34 BID51 , recently deep neural network architectures have surpassed syntacticallyinformed models BID69 BID25 BID60 , achieving state-of-the art SRL performance with no explicit modeling of syntax.Still, recent work BID53 BID25 indicates that neural network models could see even higher performance gains by leveraging syntactic information rather than ignoring it. BID25",
"indicate that many of the errors made by a strong syntax-free neural-network on SRL are tied to certain syntactic confusions such as prepositional phrase attachment, and show that while constrained inference using a relatively low-accuracy predicted parse can provide small improvements in SRL accuracy, providing a gold-quality parse leads to very significant gains. incorporate",
"syntax from a highquality parser BID31 using graph convolutional neural networks BID32 , but like BID25 they attain only small increases over a model with no syntactic parse, and even perform worse than a syntax-free model on out-of-domain data. These works",
"suggest that though syntax has the potential to improve neural network SRL models, we have not yet designed an architecture which maximizes the benefits of auxiliary syntactic information.In response, we propose linguistically-informed self-attention (LISA): a model which combines multi-task learning BID12 with stacked layers of multi-head self-attention BID64 trained to act as an oracle providing syntactic parses to downstream parameters tasked with predicting semantic role labels. Our model is",
"endto-end: earlier layers are trained to predict prerequisite parts-of-speech and predicates, which are supplied to later layers for scoring. The model is",
"trained such that, as syntactic parsing models improve, providing high-quality parses at test time can only improve its performance, allowing the model to benefit maximally from improved parsing models without requiring re-training. Unlike previous",
"work, we encode each sentence only once, predict its predicates, part-of-speech tags and syntactic parse, then predict the semantic roles for all predicates in the sentence in parallel, leading to exceptionally fast training and decoding speeds: our model matches state-of-the art accuracy in less than one quarter the training time.In extensive experiments on the CoNLL-2005 and CoNLL-2012 datasets, we show that our linguistically-informed models consistently outperform the syntax-free state-of-the-art for SRL models with predicted predicates. On CoNLL-2005,",
"our single model out-performs the previous state-of-the-art single model on the WSJ test set by nearly 1.5 F1 points absolute using its own predicted parses, and by 2.5 points using a stateof-the-art parse (Dozat and Manning, 2017) . On the challenging",
"out-of-domain Brown test set, our model also improves over the previous state-ofthe-art by more than 2.0 F1. On CoNLL-2012, our",
"model gains 1.4 points with its own parses and more than 3.0 points absolute over previous work: 13% reduction in error. Our single models",
"also out-perform state-of-the-art ensembles across all datasets, up to more than 1.4 F1 over a strong fivemodel ensemble on CoNLL-2012.",
"We present linguistically-informed self-attention: a new multi-task neural network model that effectively incorporates rich linguistic information for semantic role labeling.",
"LISA out-performs the state-of-the-art on two benchmark SRL datasets, including out-of-domain, while training more than 4× faster.",
"Future work will explore improving LISA's parsing accuracy, developing better training techniques and adapting to more tasks.",
"Following previous work BID25 , we evaluate our models on the CoNLL-2012 data split BID49 of OntoNotes 5.0 BID27 .",
"10 This dataset is drawn from seven domains: newswire, web, broadcast news and conversation, magazines, telephone conversations, and text from the bible.",
"The text is annotated with gold part-of-speech, syntactic constituencies, named entities, word sense, speaker, co-reference and semantic role labels based on the PropBank guidelines BID44 .",
"Propositions may be verbal or nominal, and there are 41 distinct semantic role labels, excluding continuation roles and including the predicate.We processed the data as follows: We convert the semantic proposition and role segmentations to BIO boundary-encoded tags, resulting in 129 distinct BIO-encoded tags (including continuation roles).",
"We initialize word embeddings with 100d pre-trained GloVe embeddings trained on 6 billion tokens of Wikipedia and Gigaword BID47 .",
"Following the experimental setup for parsing from Choi et al. FORMULA1 , we convert constituency structure to dependencies using the ClearNLP dependency converter BID17 , use automatic part-of-speech tags assigned by the ClearNLP tagger BID16 , and exclude single-token sentences in our parsing evaluation.",
"10 We constructed the data split following instructions at: BID11 ) is based on the original PropBank corpus BID44 , which labels the Wall Street Journal portion of the Penn TreeBank corpus (PTB) BID42 with predicateargument structures, plus a challenging out-ofdomain test set derived from the Brown corpus (Francis and Kučera, 1964) .",
"This dataset contains only verbal predicates, though some are multiword verbs, and 28 distinct role label types.",
"We obtain 105 SRL labels including continuations after encoding predicate argument segment boundaries with BIO tags.",
"We evaluate the SRL performance of our models using the srl-eval.pl script provided by the CoNLL-2005 shared task, 11 which computes segment-level precision, recall and F1 score.",
"We also report the predicate detection scores output by this script.For CoNLL-2005 we train the same parser as for CoNLL-2012 except on the typical split of the WSJ portion of the PTB using Stanford dependencies (de Marneffe and Manning, 2008) and POS tags from the Stanford CoreNLP left3words model BID62 .",
"We train on WSJ sections 02-21, use section 22 for development and section 23 for test.",
"This corresponds to the same train/test split used for propositions in the CoNLL-2005 dataset, except that section 24 is used for development rather than section 22."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11999999731779099,
0.072727270424366,
0.09090908616781235,
0.1904761791229248,
0.3333333432674408,
0.2539682388305664,
0.3103448152542114,
0.16326530277729034,
0.1249999925494194,
0.1666666567325592,
0.12195121496915817,
0.2857142686843872,
0.1492537260055542,
0.22727271914482117,
0.1666666567325592,
0.16393442451953888,
0.32967033982276917,
0.25806450843811035,
0.08163265138864517,
0.18518517911434174,
0.1599999964237213,
0.16326530277729034,
0.17391303181648254,
0.1304347813129425,
0.16326530277729034,
0.08163265138864517,
0.1111111044883728,
0.11764705181121826,
0.08510638028383255,
0.17910447716712952,
0.10666666179895401,
0.043478257954120636,
0.04444443807005882,
0.2545454502105713,
0.2222222238779068,
0.09302324801683426,
0.19607841968536377
] | Bk7GGldiz | true | [
"Our combination of multi-task learning and self-attention, training the model to attend to parents in a syntactic parse tree, achieves state-of-the-art CoNLL-2005 and CoNLL-2012 SRL results for models using predicted predicates."
] |
[
"Bottleneck structures with identity (e.g., residual) connection are now emerging popular paradigms for designing deep convolutional neural networks (CNN), for processing large-scale features efficiently.",
"In this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels.",
"In particular, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks.",
"During training, SCU gradually learns the channel-selectivity on-the-fly via the alternative usage of",
"(a) pruning unimportant channels, and",
"(b) rewiring the pruned parameters to important channels.",
"The rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it.",
"Our experimental results demonstrate that the SCU-based models without any postprocessing generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures.",
"Nowadays, convolutional neural networks (CNNs) have become one of the most effective approaches in various fields of artificial intelligence.",
"With a growing interest of CNNs, there has been a lot of works on designing more advanced CNN architectures BID43 BID21 .",
"In particular, the simple idea of adding identity connection in ResNet BID11 has enabled breakthroughs in this direction, as it allows to train substantially deeper/wider networks than before by alleviating existed optimization difficulties in previous CNNs.",
"Recent CNNs can scale over a thousand of layers BID12 or channels BID18 without much overfitting, and most of these \"giant\" models consider identity connections in various ways BID49 BID18 .",
"However, as CNN models grow rapidly, deploying them in the real-world becomes increasingly difficult due to computing resource constraints.",
"This has motivated the recent literature such as network pruning BID9 BID28 BID35 , weight quantization BID36 BID3 , adaptive networks BID47 BID5 BID0 BID19 , and resource-efficient architectures BID17 BID40 BID32 .For",
"designing a resource-efficient CNN architecture, it is important to process succinct representations of large-scale channels. To",
"this end, the identity connections are useful since they allow to reduce the representation dimension to a large extent while \"preserving\" information from the previous layer. Such",
"bottleneck architectures are now widely used in modern CNNs such as ResNet BID11 and DenseNet BID18 for parameter efficiency, and many state-of-the-art mobile-targeted architectures such as SqueezeNet BID20 , ShuffleNet BID53 BID32 , MoblileNet BID16 BID40 , and CondenseNet BID17 commonly address the importance of designing efficient bottlenecks.Contribution. In this",
"paper, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit for efficient utilization of parameters in particular as a bottleneck upon identity connection. At a high-level",
", SCU performs a convolutional operation to transform a given input. The main goal",
"of SCU, however, is rather to re-distribute their computations only to selected channels (a) (b)Figure",
"1:",
"(a) An illustration",
"of channel de-allocation and re-allocation procedures. The higher the saturation",
"of the channel color, the higher the ECDS value. (b) The overall structure",
"of SCU.of importance, instead of processing the entire input naively. To this end, SCU has two",
"special operations: (a) de-allocate unnecessary",
"input channels (dealloc), and (b) re-allocate the obstructed",
"channels to other channels of importance (realloc) (see Figure 1a) . They are performed without damaging",
"the network output (i.e., function-preserving operations), and therefore one can call them safely at any time during training. Consequently, training SCU is a process",
"that increases the efficiency of CNN by iteratively pruning or rewiring its parameters on-the-fly along with learning them. In some sense, it is similar to how hippocampus",
"in human brain learn, where new neurons are generated daily, and rewired into the existing network while maintaining them via neuronal apoptosis or pruning BID38 BID49 .We combine several new ideas to tackle technical",
"challenges for such on-demand, efficient trainable SCU. First, we propose expected channel damage score",
"(ECDS), a novel metric of channel importance that is used as the criterion to select channels for dealloc or realloc. Compared to other popular magnitude-based metrics",
"BID28 BID35 , ECDS allows capturing not only low-magnitude channels but also channels of low-contribution under the input distribution. Second, we impose channel-wise spatial shifting bias",
"when a channel is reallocated, providing much diversity in the input distribution. It also has an effect of enlarging the convolutional",
"kernel of SCU. Finally, we place a channel-wise scaling layer inside",
"SCU with sparsity-inducing regularization, which also promotes dealloc (and consequently realloc as well), without further overhead in inference and training.We evaluate the effectiveness of SCU by applying it to several modern CNN models including ResNet BID11 , DenseNet BID18 , and ResNeXt BID49 , on various classification datasets. Our experimental results consistently show that SCU improves",
"the efficiency of bottlenecks both in model size and classification accuracy. For example, SCU reduces the error rates of DenseNet-40 model",
"(without any post-processing) by using even less parameters: 6.57% → 5.95% and 29.97% → 28.64% on CIFAR-10/100 datasets, respectively. We also apply SCU to a mobile-targeted CondenseNet BID17 model",
", and further improve its efficiency: it even outperforms NASNet-C BID54 , an architecture searched with 500 GPUs for 4 days, while our model is constructed with minimal efforts automatically via SCU.There have been significant interests in the literature on discovering which parameters to be pruned during training of neural networks, e.g., see the literature of network sparsity learning BID48 BID25 BID41 BID35 BID30 BID4 . On the other hand, the progress is, arguably, slower for how",
"to rewire the pruned parameters of a given model to maximize its utility. proposed Dense-Sparse-Dense (DSD), a multi-step training flow",
"applicable for a wide range of DNNs showing that re-training with re-initializing the pruned parameters can improve the performance of the original network. Dynamic network surgery BID7 , on the other hand, proposed a",
"methodology of splicing the pruned connections so that mis-pruned ones can be recovered, yielding a better compression performance. In this paper, we propose a new way of rewiring for parameter",
"efficiency, i.e., rewiring for channel-selectivity, and a new architectural framework that enables both pruning and rewiring in a single pass of training without any postprocessing or re-training (as like human brain learning). Under our framework, one can easily set a targeted trade-off",
"between model compression and accuracy improvement depending on her purpose, simply by adjusting the calling policy of dealloc and realloc. We believe that our work sheds a new direction on the important",
"problem of training neural networks efficiently.",
"We demonstrate that CNNs of large-scale features can be trained effectively via channel-selectivity, primarily focusing on bottleneck architectures.",
"The proposed ideas on channel-selectivity, however, would be applicable other than the bottlenecks, which we believe is an interesting future research direction.",
"We also expect that channel-selectivity has a potential to be used for other tasks as well, e.g., interpretability BID42 , robustness BID6 , and memorization BID51 ."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.04651162400841713,
0.16326530277729034,
0.19999998807907104,
0,
0,
0.07692307233810425,
0.2222222238779068,
0.17777776718139648,
0.0555555522441864,
0.10810810327529907,
0.07692307233810425,
0.08695651590824127,
0.05405404791235924,
0.04081632196903229,
0.11764705181121826,
0.0952380895614624,
0.032258059829473495,
0.09302324801683426,
0.19354838132858276,
0.0624999962747097,
0,
0,
0,
0,
0,
0,
0.0624999962747097,
0.09302324801683426,
0.13333332538604736,
0.11764705181121826,
0.0624999962747097,
0.1395348757505417,
0,
0.10526315122842789,
0.06896550953388214,
0.1428571343421936,
0,
0.19999998807907104,
0.022727269679307938,
0.1111111044883728,
0.08695651590824127,
0.17391303181648254,
0.13793103396892548,
0.21276594698429108,
0,
0.1666666567325592,
0,
0.1818181723356247
] | SJlt6oA9Fm | true | [
"We propose a new module that improves any ResNet-like architectures by enforcing \"channel selective\" behavior to convolutional layers"
] |
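The SCU record above (paper SJlt6oA9Fm) revolves around scoring channels by importance (ECDS) and then de-allocating or re-allocating them during training. As a reader aid, here is a minimal, hypothetical PyTorch sketch of that general idea; the ChannelSelector module, its magnitude-based importance proxy, and the quantile thresholds are illustrative assumptions, not the paper's actual ECDS, and unlike the paper's operations this toy version is not function-preserving.

```python
import torch
import torch.nn as nn

class ChannelSelector(nn.Module):
    """Toy channel-selectivity layer: a per-channel scale plus a running
    estimate of input magnitude, combined into a crude importance score
    (a stand-in for the paper's ECDS)."""

    def __init__(self, num_channels, momentum=0.1):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.register_buffer("mean_abs", torch.zeros(num_channels))
        self.momentum = momentum

    def forward(self, x):  # x: (N, C, H, W)
        if self.training:
            with torch.no_grad():
                batch_abs = x.abs().mean(dim=(0, 2, 3))
                self.mean_abs.mul_(1 - self.momentum).add_(self.momentum * batch_abs)
        return x * self.gamma.view(1, -1, 1, 1)

    def importance(self):
        # |scale| times expected input magnitude -- a magnitude-style proxy only
        return self.gamma.detach().abs() * self.mean_abs

    @torch.no_grad()
    def dealloc_realloc(self, low_q=0.2, high_q=0.8):
        """Zero out low-importance channels (dealloc) and reuse the freed slots
        to mirror high-importance ones (a crude realloc; the real SCU also
        copies weights and applies a spatial shift, and preserves the output)."""
        score = self.importance()
        freed = (score <= score.quantile(low_q)).nonzero(as_tuple=True)[0]
        donors = (score >= score.quantile(high_q)).nonzero(as_tuple=True)[0]
        self.gamma[freed] = 0.0
        if len(donors) > 0:
            for i, ch in enumerate(freed):
                self.gamma[ch] = self.gamma[donors[i % len(donors)]].clone()
        return freed, donors
```

Such a layer would sit inside a bottleneck and have dealloc_realloc called periodically during training; the point of the sketch is only to show where an importance score plugs into the de-allocate/re-allocate loop.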
[
"Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications.",
"One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data.",
"A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels.",
"We pose both of these problems as prediction under a shift in design.",
"Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data.",
"Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties.",
"In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting.",
"We combine this idea with representation learning, generalizing and tightening existing results in this space.",
"Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation.",
"A long-term goal in artificial intelligence is for agents to learn how to act.",
"This endeavor relies on accurately predicting and optimizing for the outcomes of actions, and fundamentally involves estimating counterfactuals-what would have happened if the agent acted differently?",
"In many applications, such as the treatment of patients in hospitals, experimentation is infeasible or impractical, and we are forced to learn from biased, observational data.",
"Doing so requires adjusting for the distributional shift between groups of patients that received different treatments.",
"A related kind of distributional shift arises in unsupervised domain adaptation, the goal of which is to learn predictive models for a target domain, observing ground truth only in a source domain.In this work, we pose both domain adaptation and treatment effect estimation as special cases of prediction across shifting designs, referring to changes in both action policy and feature domain.",
"We separate policy from domain as we wish to make causal statements about the policy, but not about the domain.",
"Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received treatment A, one must adjust for the fact that treatment A was systematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy.",
"Furthermore, if all of our observational data comes from hospital P , but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P , an additional source of distributional shift is at play.",
"We call this a shift in domain.",
"Together, we refer to the combination of domain and policy as the design.",
"The design for which we observe ground truth is called the source, and the design of interest the target.The two most common approaches for addressing distributional shift are to learn shift-invariant representations of the data BID0 or to perform sample re-weighting or matching (Shimodaira, 2000; BID13 .",
"Representation learning approaches attempt to extract only information from the input that is invariant to a change in design and predictive of the variable of interest.",
"Such representations are typically learned by fitting deep neural networks in which activations of deeper layers are regularized to be distributionally similar across designs BID0 BID15 .",
"Although representation learning can be shown to reduce the error associated to distributional shift BID15 in some cases, standard approaches are biased, even in the limit of infinite data, as they penalize the use also of predictive information.",
"In contrast, re-weighting methods correct for distributional shift by assigning higher weight to samples from the source design that are representative of the target design, often using importance sampling.",
"This idea has been well studied in, for example, the causal inference BID20 , domain adaptation (Shimodaira, 2000) and reinforcement learning BID19 literature.",
"For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the inverse probability of observed treatments (treatment propensity).",
"Re-weighting with knowledge of importance sampling weights often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples (Swaminathan & Joachims, 2015) .A",
"significant hurdle in applying re-weighting methods is that optimal weights are rarely known in practice. There",
"are a variety of methods to learn these weights. Weights",
"can be estimated as the inverse of estimated feature or treatment densities BID20 BID7 but this plug-in approach can lead to highly unstable estimates. More stable",
"methods learn weights by minimizing distributional distance metrics BID8 BID13 BID4 Zubizarreta, 2015) . Closely related",
", matching (Stuart, 2010) produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying a distributional",
"or unit-wise metric is challenging, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This has inspired heuristics",
"such as first performing variable selection and then finding a matching in the selected covariates.Our key algorithmic contribution is to show how to combine the intuition behind shift-invariant representation learning and re-weighting methods by jointly learning a representation Φ of the input space and a weighting function w(Φ) to minimize a) the re-weighted empirical",
"risk and b) a re-weighted measure of",
"distributional shift between designs. This is useful also for the",
"identity representation Φ(x) = x, as it allows for principled control of the variance of estimators through regularization of the re-weighting function w(x), mitigating the issues of exact importance sampling methods. Further, this allows us to",
"evaluate w on hold-out samples to select hyperparameters or do early stopping. Finally, letting w depend",
"on Φ alleviates the problem of choosing a metric by which to optimize sample weights, as Φ is trained to extract information predictive of the outcome. We capture these ideas in",
"an upper bound on the generalization error under a shift in design and specialize it to the case of treatment effect estimation.",
"We have proposed a theory and an algorithmic framework for learning to predict outcomes of interventions under shifts in design-changes in both intervention policy and feature domain.",
"The framework combines representation learning and sample re-weighting to balance source and target designs, emphasizing information from the source sample relevant for the target.",
"Existing reweighting methods either use pre-defined weights or learn weights based on a measure of distributional distance in the input space.",
"These approaches are highly sensitive to the choice of metric used to measure balance, as the input may be high-dimensional and contain information that is not predictive of the outcome.",
"In contrast, by learning weights to achieve balance in representation space, we base our re-weighting only on information that is predictive of the outcome.",
"In this work, we apply this framework to causal effect estimation, but emphasize that joint representation learning and re-weighting is a general idea that could be applied in many applications with design shift.Our work suggests that distributional shift should be measured and adjusted for in a representation space relevant to the task at hand.",
"Joint learning of this space and the associated re-weighting is attractive, but several challenges remain, including optimization of the full objective and relaxing the invertibility constraint on representations.",
"For example, variable selection methods are not covered by our current theory, as they induce a non-ivertible representation, but a similar intuition holds there-only predictive attributes should be used when measuring imbalance.",
"We believe that addressing these limitations is a fruitful path forward for future work.",
"We denote the re-weighted density p w µ (x, t) := w(x, t)p µ (x, t).Expected",
"& empirical risk We let the (expected) risk of f measured by h under p µ be denoted DISPLAYFORM0 where l h is an appropriate loss function, and the empirical risk over a sample DISPLAYFORM1 We use the superscript w to denote the re-weighted risks DISPLAYFORM2 Definition A1 (Importance sampling). For two",
"distributions p, q on Z, of common support, ∀z ∈ Z : p(z) > 0 ⇐⇒ q(z) > 0, we call DISPLAYFORM3 the importance sampling weights of p and q. Definition",
"2 (Restated). The integral",
"probability metric (IPM) distance, associated with the function family H, between distributions p and q is defined by DISPLAYFORM4 We begin by bounding the expected risk under a distribution p π in terms of the expected risk under p µ and a measure of the discrepancy between p π and p µ . Using definition",
"2 we can show the following result. Lemma 1 (Restated",
"). For hypotheses f",
"with loss f such that f / f H ∈ H, and p µ , p π with common support, there exists a valid re-weighting w of p µ , see Definition 1, such that, DISPLAYFORM5 The first inequality is tight for importance sampling weights, w(x, t) = p π (x, t)/p µ (x, t). The second inequality",
"is not tight for general f , even if f ∈ H, unless p π = p µ .Proof. The results follows",
"immediately",
"from the definition of IPM. DISPLAYFORM6 Further, for importance",
"sampling weights w IS (x, t) = π(t;x) µ(t;x) , for any h ∈ H, DISPLAYFORM7 and the LHS is tight.We could apply Lemma 1 to bound the loss under a distribution q based on the weighted loss under p. Unfortunately, bounding the expected",
"risk in terms of another expectation is not enough to reason about generalization from an empirical sample. To do that we use Corollary 2 of BID6 ,",
"restated as a Theorem below.Theorem A1 (Generalization error of re-weighted loss BID6 ). For a loss function h of any hypothesis",
"h ∈ H ⊆ {h : X → R}, such that d = Pdim({ h : h ∈ H}) where Pdim is the pseudo-dimension, and a weighting function w(x) such that E p [w] = 1, with probability 1 − δ over a sample (x 1 , ..., x n ), with empirical distributionp, DISPLAYFORM8 we get the simpler form DISPLAYFORM9 We will also need the following result about estimating IPMs from finite samples from Sriperumbudur et al. (2009) .Theorem A2 (Estimation of IPMs from empirical",
"samples (Sriperumbudur et al., 2009) ). Let M be a measurable space. Suppose k is measurable",
"kernel such that sup x∈M k(x",
", x) ≤ C ≤ ∞ and H the reproducing kernel Hilbert space induced by k, with ν := sup x∈M,f ∈H f (x) < ∞. Then, witĥ p,q the empirical distributions of p, q",
"from m and n samples respectively, and with probability at least 1 − δ, DISPLAYFORM10 We consider learning twice-differentiable, invertible representations Φ : X → Z, where Z is the representation space, and Ψ : Z → X is the inverse representation, such that Ψ(Φ(x)) = x for all x. Let E denote space of such representation functions",
". For a design π, we let p π,Φ (z, t) be the distribution",
"induced by Φ over Z × T , with p w π,Φ (z, t) := p π,Φ (z, t)w(Ψ(z), t) its re-weighted form andp w π,Φ its re-weighted empirical form, following our previous notation. Note that we do not include t in the representation itself",
", although this could be done in principle. Let G ⊆ {h : Z × T → Y} denote a set of hypotheses h(Φ, t",
") operating on the representation Φ and let F denote the space of all compositions, F = {f = h(Φ(x), t) : h ∈ G, Φ ∈ E}. We now restate and prove Theorem 1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1764705777168274,
0.09090908616781235,
0.1304347813129425,
0.13793103396892548,
0.0833333283662796,
0.060606054961681366,
0.1621621549129486,
0.06666666269302368,
0.3529411852359772,
0.13793103396892548,
0.09999999403953552,
0.0476190410554409,
0.1249999925494194,
0.2686567008495331,
0.12121211737394333,
0.1090909019112587,
0.07843136787414551,
0.08695651590824127,
0.1428571343421936,
0.1111111044883728,
0.05128204822540283,
0,
0.04081632196903229,
0.09090908616781235,
0.25641024112701416,
0.10526315122842789,
0.043478257954120636,
0,
0,
0,
0.06451612710952759,
0.05128204822540283,
0,
0.06666666269302368,
0.08695651590824127,
0.1538461446762085,
0.043478257954120636,
0,
0,
0.21621620655059814,
0.3414634168148041,
0.17142856121063232,
0.0555555522441864,
0.0476190410554409,
0,
0.19672130048274994,
0.09999999403953552,
0,
0.06666666269302368,
0,
0.06666666269302368,
0.045454539358615875,
0,
0.07692307233810425,
0,
0,
0.06779660284519196,
0.0555555522441864,
0.07999999821186066,
0.1071428507566452,
0,
0,
0.023529408499598503,
0,
0,
0.0416666604578495,
0.0634920597076416,
0,
0,
0,
0.04444443807005882
] | B1X4DWWRb | true | [
"A theory and algorithmic framework for prediction under distributional shift, including causal effect estimation and domain adaptation"
] |
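The record above (paper B1X4DWWRb) bounds generalization under design shift with an integral probability metric between a re-weighted source distribution and the target. A common concrete member of the IPM family is the kernel MMD; the NumPy sketch below estimates a weighted MMD^2 between two samples. The function names, the RBF bandwidth, and the biased estimator are illustrative choices and not the paper's training objective.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_mmd2(x_src, x_tgt, w=None, sigma=1.0):
    """Biased estimate of MMD^2 between a re-weighted source sample and a
    target sample -- one concrete instance of the IPM family used in the bound."""
    n, m = len(x_src), len(x_tgt)
    w = np.full(n, 1.0 / n) if w is None else w / w.sum()
    v = np.full(m, 1.0 / m)
    k_ss = rbf_kernel(x_src, x_src, sigma)
    k_st = rbf_kernel(x_src, x_tgt, sigma)
    k_tt = rbf_kernel(x_tgt, x_tgt, sigma)
    return w @ k_ss @ w - 2.0 * (w @ k_st @ v) + v @ k_tt @ v

# toy usage: a shifted Gaussian gives a larger value than a matched sample
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))
tgt = rng.normal(1.0, 1.0, size=(200, 2))
print(weighted_mmd2(src, tgt), weighted_mmd2(src, src))
```

In a joint representation-learning setup, the same quantity would be computed on learned features and the weights would themselves be optimized, which is beyond this sketch.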
[
"Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models.",
"While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well.",
"Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift.",
"One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space.",
"We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions.",
"Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space.",
"We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions.",
"Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.",
"Consider a typical classification problem, where x n ∈ R D denotes the D-dimensional features and y n ∈ [1, . . . , K] denotes the class label.",
"Assume we have a parametric model p(y|x, θ) for the conditional distribution where θ denotes weights and biases of a neural network, and p(θ) is a prior distribution over parameters.",
"The Bayesian posterior over parameters is given by",
"p(y n |x n , θ).",
"(",
"Computing the exact posterior distribution over θ is computationally expensive (if not impossible) when p(y n |x n , θ) is a deep neural network.",
"A variety of approximations have been developed for Bayesian neural networks, including Laplace approximation (MacKay, 1992) , Markov chain Monte Carlo methods (Neal, 1996; Welling & Teh, 2011; Springenberg et al., 2016) , variational Bayesian methods (Graves, 2011; Blundell et al., 2015; Louizos & Welling, 2017; Wen et al., 2018) and Monte-Carlo dropout (Gal & Ghahramani, 2016; Srivastava et al., 2014) .",
"While computing the posterior is challenging, it is usually easy to perform maximum-a-posteriori (MAP) estimation, which corresponds to a mode of the posterior.",
"The MAP solution can be written as the minimizer of the following loss (negative log likelihood + negative log prior):",
"log p(y n |x n , θ).",
"The MAP solution is computationally efficient, but only gives a point estimate and not a distribution over parameters.",
"Deep ensembles, proposed by Lakshminarayanan et al. (2017) , train an ensemble of neural networks by initializing at M different values and repeating the minimization multiple times which could lead to M different solutions, if the loss is non-convex.",
"(Lakshminarayanan et al. (2017) found adversarial training provides additional benefits in some of their experiments, but we will ignore adversarial training and focus only on ensembles with random initialization in this paper.",
")",
"Given finite training data, many parameter values could equally well explain the observations, and capturing these diverse solutions is crucial for quantifying epistemic uncertainty (Kendall & Gal, 2017) .",
"Bayesian neural networks learn a distribution over weights, and a good posterior approximation should be able to learn multi-modal posterior distributions in theory.",
"Deep ensembles were inspired by the bootstrap (Breiman, 1996) , which has nice theoretical properties.",
"However, it has been empirically observed by Lakshminarayanan et al. (2017) ; Lee et al. (2015) that training individual networks with just random initialization is sufficient in practice and using the bootstrap even hurts performance in some cases (e.g. for small ensemble sizes).",
"Furthermore, Ovadia et al. (2019) and Gustafsson et al. (2019) independently benchmarked existing methods for uncertainty quantification on a variety of datasets and architectures, and observed that ensembles tend to outperform approximate Bayesian neural networks in terms of both accuracy and uncertainty, particularly under dataset shift.",
") on train and validation data.",
"These empirical observations raise an important question: Why do ensembles trained with just random initialization work so well in practice?",
"One possible hypothesis is that ensembles tend to sample from different modes 1 in function space, whereas variational Bayesian methods (which minimize",
")) might fail to explore multiple modes even though they are effective at capturing uncertainty within a single mode.",
"See Figure 1 for a cartoon illustration.",
"Note that while the MAP solution is a local minima for the training loss by definition, it may not necessarily be a local minima for the validation loss.",
"Recent work on understanding loss landscapes (Fort & Jastrzebski, 2019; Draxler et al., 2018; allows us to investigate this hypothesis.",
"Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019) .",
"Our findings show that:",
"• The functions sampled along a single training trajectory or subspace thereof (e.g. diagonal Gaussian, low-rank Gaussian and Dropout subspaces) tend to be very similar in predictions (while potential far away in the weight space), whereas functions sampled from different randomly initialized trajectories tend to be very diverse.",
"• Solution modes are connected in the loss landscape but they are distinct in the space of predictions.",
"Low-loss tunnels create functions with near-identical low values of loss along the path, however these functions tend to be very different in function space, changing significantly in the middle of the tunnel.",
"Our results show that trajectories of randomly initialized neural networks explore different modes in function space, which explains why deep ensembles with random initializations help.",
"They are essentially orthogonal to each other in the space of weights and very diverse in terms of their predictions.",
"While these modes can be connected via optimized low-loss paths between them, we demonstrate that they correspond to distinct functions in terms of their predictions.",
"Therefore the connectivity in the loss landscape does not imply connectivity in the space of functions.",
"Subspace sampling methods such as weight averaging, Monte Carlo dropout, and various versions of local Gaussian approximations, sample functions that might lie relatively far from the starting point in the weight space, however, they remain in the vicinity of their starting point in terms of predictions, giving rise to an insufficiently diverse set of functions.",
"Using the concept of the diversityaccuracy plane, we demonstrate empirically that these subspace sampling methods never reach the combination of diversity and accuracy that independently trained models do, limiting their usefulness for ensembling.",
"A VISUALIZING THE LOSS LANDSCAPE ALONG ORIGINAL DIRECTIONS AND WA DIRECTIONS Figure S1 shows the loss landscape (train as well as the validation set) and the effect of WA.",
"Figure S1 : Loss landscape versus generalization: weights are typically initialized close to 0 and increase radially through the course of training.",
"Top row: we pick two optima from different trajectories as the axes, and plot loss surface.",
"Looking at x and y axes, we observe that while a wide range of radii achieve low loss on training set, the range of optimal radius values is narrower on validation set.",
"Bottom row: we average weights within each trajectory using WA and use them as axes.",
"A wider range of radius values generalize better along the WA directions, which confirms the findings of ."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19607841968536377,
0.19999998807907104,
0.16949151456356049,
0.25806450843811035,
0.28070175647735596,
0.2857142686843872,
0.21276594698429108,
0.4897959232330322,
0.11538460850715637,
0.1818181723356247,
0.10810810327529907,
0,
0.1538461446762085,
0.05128204822540283,
0.2083333283662796,
0.12765957415103912,
0,
0.17391303181648254,
0.1875,
0.17241378128528595,
0.10526315122842789,
0.08163265138864517,
0.13636362552642822,
0.17142856121063232,
0.14705881476402283,
0.05714285373687744,
0.08163265138864517,
0.11764705181121826,
0.1249999925494194,
0.0555555522441864,
0.23999999463558197,
0.03999999538064003,
0.12903225421905518,
0,
0.14084506034851074,
0.22727271914482117,
0.1071428507566452,
0.2222222238779068,
0.1702127605676651,
0.07407406717538834,
0.24390242993831635,
0.16438356041908264,
0.20689654350280762,
0.18867923319339752,
0.19607841968536377,
0.13333332538604736,
0.24137930572032928,
0.045454539358615875,
0.09090908616781235
] | r1xZAkrFPr | true | [
"We study deep ensembles through the lens of loss landscape and the space of predictions, demonstrating that the decorrelation power of random initializations is unmatched by subspace sampling that only explores a single mode."
] |
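The deep-ensembles record above (paper r1xZAkrFPr) compares ensemble members in the space of predictions rather than weights. The sketch below computes a simple prediction-disagreement measure together with per-member accuracy, the two quantities behind a diversity--accuracy style plot; the exact normalization used in the paper may differ, so treat this as an approximation of the idea.

```python
import numpy as np

def disagreement(preds_a, preds_b):
    """Fraction of test points on which two models predict different labels."""
    return np.mean(preds_a != preds_b)

def diversity_accuracy(member_preds, labels):
    """For each ensemble member, return (accuracy, mean disagreement with the
    other members) -- the two axes of a diversity--accuracy style plot."""
    member_preds = np.asarray(member_preds)          # (M, N) predicted classes
    accs = (member_preds == labels).mean(axis=1)
    divs = []
    for i in range(len(member_preds)):
        others = [disagreement(member_preds[i], member_preds[j])
                  for j in range(len(member_preds)) if j != i]
        divs.append(np.mean(others))
    return accs, np.array(divs)

# toy usage: 3 members, 1000 points, 10 classes (random "predictions" only)
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
members = rng.integers(0, 10, size=(3, 1000))
print(diversity_accuracy(members, labels))
```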
[
"Existing deep learning approaches for learning visual features tend to extract more information than what is required for the task at hand.",
"From a privacy preservation perspective, the input visual information is not protected from the model; enabling the model to become more intelligent than it is trained to be.",
"Existing approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time.",
"In this research, we propose a three-fold novel contribution:",
"(i) a novel metric to measure the trust score of a trained deep learning model,",
"(ii) a model-agnostic solution framework for trust score improvement by suppressing all the unwanted tasks, and",
"(iii) a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models.",
"In the first set of experiments, we measure and improve the trust scores of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet and demonstrate that Inception-v1 is having the lowest trust score.",
"Additionally, we show results of our framework on color-MNIST dataset and practical applications of face attribute preservation in Diversity in Faces (DiF) and IMDB-Wiki dataset.",
"The primary objective of artificial intelligence is to imitate human intelligence tabular rasa.",
"Especially, with the advent of deep learning (DL), the models are striving to perform composite tasks by learning complex relationships and patterns available in noisy, unstructured data (Ruder, 2017) .",
"With this sudden growth in the consumption of data by models, there has been a lot of study on the privacy and security of the learnt model (Shokri & Shmatikov, 2015) .",
"Data governance and model governance frameworks, control and protect sharing of data and model meta information between two entities and also their social implications (Helbing, 2019) .",
"The premise of model privacy has majorly revolved around preserving the model content from human (man-in-the-middle) adversarial attacks (Abadi et al., 2016) .",
"However, the model itself could learn all the private information from the data and become much more intelligent than the original intent it was trained for.",
"With the strive for model generalization, including techniques for transfer learning and multi-task learning, the model is encouraged to learn more and more generic features from the data that could be used for more than one task (Søgaard & Goldberg, 2016) .",
"Consider the example described in Figure 1 , where a classifier is trained to detect the shape of an object from images.",
"However, using the features extracted by the above classifier, the size and location of the object in the image can also be predicted.",
"Thus, a shape classifier is more intelligent than its objective of only predicting the shape of the object.",
"While in certain applications, this is a required property of classification models (such as in, transfer learning and domain adaptation), in most of the privacy preserving applications, the data and its other visual attributes have to be kept private from the model itself.",
"As an additional real-world example, we train a DL model to predict the gender from a face image.",
"However, the DL model learns most generic features from the face image, enabling it to predict the age and the identity of the person.",
"The input face image could be saved securely from a human attacker, however, there is not much focus on securing from the model itself.",
"Additionally as shown in Figure 1",
"(a), the task of debiasing is to remove the the bias (color) in learning a specific task (shape).",
"This happens due to the high correlation between the color and shapes in the input images.",
"However, as shown in Figure 1",
"(b), our task in model trust is to forcefully The fundamental research motivation in this work is to study if a learning model could be restricted to perform only one or a specific group of tasks.",
"ensure that the model learns to perform only one or few selected tasks (shape) from the input images and unlearn all other tasks (color, size, location).",
"If multi-class classification tasks could be done from the same image, the research question is, \"How can we ensure that the model is learnt only for one or a few tasks (called as, preserved tasks), and is strictly not learnt for the other tasks (called as, suppressed tasks)?\".",
"To pursue research on this problem, there are few evident challenges:",
"(i) there is a lack of a balanced and properly curated image dataset where multiple classification tasks could be performed on the same image,",
"(ii) the complete knowledge of both the preserved tasks and the suppressed tasks should be known apriori, that is, we cannot suppress those tasks that we don't have information about, and",
"(iii) presence of very few model agnostic studies to preserve and suppress different task groups.",
"In this research, we propose a novel framework to measure the trust score of a trained DL model and a solution approach to improve the trust score during training.",
"The major research contributions are summarized as follows:",
"1. A simulated, class-balanced, multi-task dataset, PreserveTask with five tasks that could be performed on each image: shape, size, color, location, and background color classification.",
"2. A novel metric to measure the trustworthiness score of a trained DL model.",
"The trust scores of five popular DL models are measured and compared: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet.",
"A generic model-agnostic solution framework to improve the trust scores of DL models during training by preserving a few tasks and suppressing other tasks on the same image.",
"3. Experimental analysis are performed for the proposed framework in comparison with other existing approaches under different settings.",
"Experimentally, we considered the model with the least trust score, Inception-v1, and showed that the proposed framework aids in improving the overall trust score 1 .",
"4. To demonstrate the practical applications and generalizability of the metric and the solution framework, we show additionally results in colored MNIST dataset and face attribute preservation using two datasets:",
"(i) Diversity in Faces (DiF) (Merler et al.)",
"(ii) IMDBWiki (Rothe et al., 2018).",
"In this research, we showcased a model-agnostic framework for measuring and improving the trustworthiness of a model from a privacy preservation perspective.",
"The proposed framework did not assume the need for the suppression task labels during train time, while, similar performance could be obtained by training using random classification boundaries.",
"A novel simulated benchmark dataset called PreserveTask was created to methodically evaluate and analyze a DL model's capability in suppressing shared task learning.",
"This dataset opens up further research opportunities in this important and practically necessary research domain.",
"Experimentally, it was shown that popular DL models such as VGG16, VGG19, Inception-v1, DenseNet, and MobileNet show poor trust scores and tend to be more intelligent than they were trained for.",
"Also, we show a practical case study of our proposed approach in face attribute classification using:",
"(i) Diversity in Faces (DiF) and",
"(ii) IMDB-Wiki datasets.",
"We would like to extend this work by studying the effect of multi-label classification tasks during suppression."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1621621549129486,
0.09756097197532654,
0.21052631735801697,
0.1538461446762085,
0.3870967626571655,
0.24242423474788666,
0.10810810327529907,
0.30434781312942505,
0.21052631735801697,
0.06896550953388214,
0.1818181723356247,
0.09090908616781235,
0.052631575614213943,
0,
0.04999999329447746,
0.11999999731779099,
0.10526315122842789,
0.0555555522441864,
0.0624999962747097,
0.14814814925193787,
0.1764705777168274,
0.10810810327529907,
0.04999999329447746,
0,
0.1875,
0.12903225421905518,
0,
0.21276594698429108,
0.09756097197532654,
0.1090909019112587,
0,
0.09999999403953552,
0.09756097197532654,
0.1249999925494194,
0.5,
0,
0.0952380895614624,
0.25806450843811035,
0.11764705181121826,
0.41860464215278625,
0.05714285373687744,
0.21052631735801697,
0.09302324801683426,
0,
0,
0.21621620655059814,
0.13636362552642822,
0.29999998211860657,
0.06451612710952759,
0.12765957415103912,
0.1818181723356247,
0.08695651590824127,
0,
0.11764705181121826
] | B1lf4yBYPr | true | [
"Can we trust our deep learning models? A framework to measure and improve a deep learning model's trust during training."
] |
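The record above (paper B1lf4yBYPr) is about training features that serve a preserved task while suppressing other tasks; its own framework reportedly does not require labels for the suppressed tasks. As a point of comparison only, the sketch below shows gradient reversal, a generic and widely used mechanism for suppressing a task when its labels are available. The module and variable names are illustrative; this is not the paper's method.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the
    backward pass, so a head trained on a suppressed task pushes the shared
    features to become uninformative for that task."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def suppress_head_loss(feats, head, targets, lam=1.0):
    """Cross-entropy on the suppressed task, applied through gradient reversal."""
    logits = head(GradReverse.apply(feats, lam))
    return nn.functional.cross_entropy(logits, targets)

# toy usage: shared features, one preserved head and one suppressed head
feat = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
preserve_head, suppress_head = nn.Linear(64, 4), nn.Linear(64, 3)
x = torch.randn(8, 32)
y_keep, y_drop = torch.randint(0, 4, (8,)), torch.randint(0, 3, (8,))
z = feat(x)
loss = nn.functional.cross_entropy(preserve_head(z), y_keep) \
     + suppress_head_loss(z, suppress_head, y_drop)
loss.backward()
```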
[
" Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training.",
"In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon.",
"We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory.",
"This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on.",
"We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks.",
"We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.",
"In recent years, model-free reinforcement learning methods using deep neural network controllers have proven effective on a wide range of tasks, from playing video or text-based games BID26 BID29 to learning algorithms (Zaremba et al., 2015) and complex locomotion tasks Zhang et al., 2015) .",
"However, these methods often require a large number of interactions with the environment in order to learn.",
"While this is not a problem if the environment is simulated, it can limit the application of these methods in realistic environments where interactions with the environment are slow, expensive or potentially dangerous.",
"Building a simulator where the agent can safely try out policies without facing real consequences can mitigate this problem, but requires human engineering effort which increases with the complexity of the environment being modeled.Model-based reinforcement learning approaches try to learn a model of the environment dynamics, and then use this model to plan actions or train a parameterized policy.",
"A common setting is where an agent alternates between collecting experience by executing actions using its current policy or dynamics model, and then using these experiences to improve its dynamics model.",
"This approach has been shown empirically to significantly reduce the required number of environment interactions needed to obtain an effective policy or planner BID1 BID7 BID28 BID6 .Despite",
"these improvements in sample complexity, there exist settings where even a single poor action executed by an agent in a real environment can have consequences which are not acceptable. At the",
"same time, with data collection becoming increasingly inexpensive, there are many settings where observational data of an environment is abundant. This suggests",
"a need for algorithms which can learn policies primarily from observational data, which can then perform well in a real environment. Autonomous driving",
"is an example of such a setting: on one hand, trajectories of human drivers can be easily collected using traffic cameras BID14 , resulting in an abundance of observational data; on the other hand, learning through interaction with the real environment is not a viable solution.However, learning policies from purely observational data is challenging because the data may only cover a small region of the space over which it is defined. If the observational",
"data consists of stateaction pairs produced by an expert, one option is to use imitation learning BID36 . However, this is well-known",
"to suffer from a mismatch between the states seen at training and execution time BID37 . Another option is to learn",
"a dynamics model from observational data, and then use it to train a policy BID31 . However, the dynamics model",
"may make arbitrary predictions outside the domain it was trained on, which may wrongly be associated with low cost (or high reward) as shown in FIG0 . The policy network may then",
"exploit these errors in the dynamics model and produce actions which lead to wrongly optimistic states. In the interactive setting,",
"this problem is naturally self-correcting, since states where the model predictions are wrongly optimistic will be more likely to be experienced, and thus will correct the dynamics model. However, the problem persists",
"if the dataset of environment interactions which the model is trained on is fixed.In this work, we propose to train a policy while explicitly penalizing the mismatch between the distribution of trajectories it induces and the one reflected in the training data. We use a learned dynamics model",
"which is unrolled for multiple time steps, and train a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a",
"policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by",
"using the uncertainty of the dynamics model about its own predictions, calculated using dropout. We apply our approach in the context",
"of learning policies to drive an autonomous car in dense traffic, using a large-scale dataset of real-world driving trajectories which we also adapt into an environment for testing learned policies 1 . We show that model-based control using",
"this additional uncertainty regularizer substantially outperforms unregularized control, and enables learning good driving policies using only observational data with no environment interaction or additional labeling by an expert. We also show how to effectively leverage",
"an action-conditional stochastic forward model using a modified posterior distribution, which encourages the model to maintain sensitivity to input actions.",
"In this work, we proposed a general approach for learning policies from purely observational data.",
"The key elements are:",
"i) a learned stochastic dynamics model, which is used to optimize a policy cost over multiple time steps,",
"ii) an uncertainty term which penalizes the divergence of the trajectories induced by the policy from the manifold it was trained on, and",
"iii) a modified posterior distribution which keeps the stochastic model responsive to input actions.",
"We have applied this approach to a large observational dataset of real-world traffic recordings, and shown it can effectively learn policies for navigating in dense traffic, which outperform other approaches which learn from observational data.",
"However, there is still a sizeable gap between the performance of our learned policies and human performance.",
"We release both our dataset and environment, and encourage further research in this area to help narrow this gap.",
"We also believe this provides a useful setting for evaluating generative models in terms of their ability to produce good policies.",
"Finally, our approach is general and could potentially be applied to many other settings where interactions with the environment are expensive or unfeasible, but observational data is plentiful."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1860465109348297,
0.09999999403953552,
0.1666666567325592,
0.23255813121795654,
0.1395348757505417,
0.35999998450279236,
0.10169491171836853,
0.17142856121063232,
0.04255318641662598,
0.1515151411294937,
0.08695651590824127,
0.08888888359069824,
0.08510638028383255,
0.10256409645080566,
0.3684210479259491,
0.18918918073177338,
0.10526315122842789,
0.21621620655059814,
0.23529411852359772,
0.04255318641662598,
0.10810810327529907,
0.045454539358615875,
0.13793103396892548,
0.1904761791229248,
0.1860465109348297,
0.1111111044883728,
0.23529411852359772,
0.23076923191547394,
0.1666666567325592,
0.42424240708351135,
0,
0.17142856121063232,
0.15789473056793213,
0.1875,
0.35999998450279236,
0.11764705181121826,
0.05714285373687744,
0.1538461446762085,
0.17777776718139648
] | HygQBn0cYm | true | [
"A model-based RL approach which uses a differentiable uncertainty penalty to learn driving policies from purely observational data."
] |
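The record above (paper HygQBn0cYm) trains a policy through a learned dynamics model unrolled over time, adding an uncertainty cost estimated with dropout to the policy cost. Below is a hypothetical sketch of such a rollout cost; world_model, policy, policy_cost_fn, the horizon, and the lam weighting are placeholders, and the exact uncertainty measure and cost structure in the paper may differ.

```python
import torch

def rollout_cost(world_model, policy, state, policy_cost_fn,
                 horizon=10, n_dropout=4, lam=1.0):
    """Unroll a learned (dropout-regularized) dynamics model under the current
    policy; return policy cost + lam * uncertainty cost, where the uncertainty
    is the variance of next-state predictions across dropout samples."""
    policy_cost = torch.zeros(())
    uncertainty = torch.zeros(())
    world_model.train()                 # keep dropout stochastic during rollout
    for _ in range(horizon):
        action = policy(state)
        # several stochastic forward passes through the same model
        samples = torch.stack([world_model(state, action) for _ in range(n_dropout)])
        next_state = samples.mean(dim=0)
        uncertainty = uncertainty + samples.var(dim=0).mean()
        policy_cost = policy_cost + policy_cost_fn(next_state, action)
        state = next_state
    return policy_cost + lam * uncertainty
```

Because the whole rollout is differentiable, the returned scalar can be backpropagated into the policy network while the dynamics model stays fixed, which is the overall shape of the training loop the record describes.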
[
"Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application.",
"In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the\n",
"individual data sequence.",
"This enables style transfer, interpolation and morphing within generated sequences.",
"We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches.",
"Time series data often arise as a related 'family' of sequences, where certain characteristic differences exist between the sequences in a dataset.",
"Examples include the style of handwritten text (Graves, 2013) , the response of a patient to an anaesthetic (Bird et al., 2019) , or the style of locomotion in motion capture (mocap) data (Ghosh et al., 2017) .",
"In this paper, we will consider how such variation may be modelled, and effectively controlled by an end user.",
"Such related data is often pooled to train a single dynamical system, despite the internal variation.",
"For a simple model, such as a linear dynamical system (LDS), this will result in learning only an average effect.",
"In contrast, a recurrent neural network (RNN) may model this variation, but in an implicit and opaque manner.",
"Such a 'black-box' approach prohibits end-user control, and may suffer from mode drift, such as in Ghosh et al. (2017) , where a generated mocap sequence performs an unprompted transition from walking to drinking.",
"Some of these problems may be alleviated by appending 'context labels' to the inputs (see e.g. Goodfellow et al., 2016, §10.2.4) which describe the required customization.",
"However, such labels are often unavailable, and the approach may fail to model the variation adequately even when they are.",
"To move beyond these approaches, we consider latent variable models, where a latent variable z characterizes each sequence.",
"This may be seen as a form of multi-task learning (MTL, see Zhang & Yang, 2017) , from which we derive the name multi-task dynamical system (MTDS), with each sequence treated as a task.",
"A straightforward approach is to append the latent z to the inputs of the model, similarly to the 'context label' approach, thereby providing customization of the various bias (or offset) parameters of the model.",
"A number of examples of this have been proposed recently, e.g. in Yingzhen & Mandt (2018) and Miladinović et al. (2019) .",
"Nevertheless, this 'bias customization' has limited expressiveness and is often unsuitable for customizing simple models.",
"In this paper we investigate a more powerful form of customization which modulates all the system and emission parameters.",
"In this approach, the parameters of each task are constrained to lie on a learned low dimensional manifold, indexed by the latent z.",
"Our experiments show that this approach results in improved performance and/or greater data efficiency than existing approaches, as well as greater robustness to unfamiliar test inputs.",
"Further, varying z can generate a continuum of models, allowing interpolation between sequence predictions (see Figure 1b for an example), and potentially morphing of sequence characteristics over time.",
"Contributions In this paper we propose the MTDS, which goes beyond existing work by allowing full adaptation of all parameters of general dynamical systems via use of a learned nonlinear manifold.",
"We show how the approach may be applied to various popular models, and provide general purpose . . . .",
". . learning and inference algorithms. Our experimental studies use synthetic data (sum of two damped harmonic oscillators) and real-world human locomotion mocap data. We illuminate various properties of the MTDS formulation in our experiments, such as data efficiency, user control, and robustness to dataset shift, and show how these go beyond existing approaches to time series modelling. We finally utilize the increased user control in the context of mocap data to demonstrate style morphing.",
"To this end, we introduce the model in Section 2, giving examples and discussing the particular challenges in learning and inference.",
"We discuss the relation to existing work in Section 3.",
"Experimental setup and results are given in Section 4 with a conclusion in Section 5.",
"In this work we have shown how to extend dynamical systems with a general-purpose hierarchical structure for multi-task learning.",
"Our MTDS framework performs customization at the level of all parameters, not just the biases, and adapts all parameters for general classes of dynamical systems.",
"We have seen that the latent code can learn a fine-grained embedding of sequence variation and can be used to modulate predictions.",
"Clearly good predictive performance for sequences requires task inference, whether implicit or explicit.",
"There are three advantages of making this inference explicit.",
"Firstly, it enhances control over predictions.",
"This might be used by animators to control the style of predictions for mocap models, or to express domain knowledge, such as ensuring certain sequences evolve similarly.",
"Secondly, it can improve generalization from small datasets since task interpolation is available out-of-the-box.",
"Thirdly, it can be more robust against changes in distribution at test time than a pooled model: (2014) is a unit Gaussian p(z) = N (0, I).",
"This choice allows simple sampling schemes, and straight-forward posterior approximations.",
"It is also a useful choice for interpolation, since it allows continuous deformation of its outputs.",
"An alternative choice might be a uniform distribution over a compact set, however posterior approximation is more challenging, see Svénsen (1998) for one approach.",
"Sensible default choices for h φ include affine operators and multilayer perceptrons (MLPs).",
"However, when the parameter space R d is large, it may be infeasible to predict d outputs from an MLP.",
"Consider an RNN with 100k parameters.",
"If an MLP has m L−1 = 300 units in the final hidden layer, the expansion to the RNN parameters in the final layer will require 30×10 6 parameters alone.",
"A practical approach is to use a low rank matrix for this transformation, equivalent to adding an extra linear layer of size m L where we must have m L m L−1 to reduce the parameterization sufficiently.",
"Since we will typically need m L to be O(10), we are restricting the parameter manifold of θ to lie in a low dimensional subspace.",
"Since MLP approaches with a large base model will then usually have a restricted final layer, are there any advantages over a simple linear-Gaussian model for the prior p(z) and h φ ?",
"There may indeed be many situations where this simpler model is reasonable.",
"However, we note some advantages of the MLP approach:",
"1. The MLP parameterization can shift the density in parameter space to more appropriate regions via nonlinear transformation.",
"2. A linear space of recurrent model parameters can yield highly non-linear changes even to simple dynamical systems (see e.g. the bifurcations in §8 of Strogatz, 2018).",
"We speculate it might be advantageous to curve the manifold to avoid such phenomena.",
"3. More expressive choices may help utilization of the latent space (e.g. Chen et al., 2017) .",
"This may in fact motivate moving beyond a simple MLP for the h φ ."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1621621549129486,
0.1818181723356247,
0.1111111119389534,
0.07999999821186066,
0.2857142686843872,
0.0555555522441864,
0.045454539358615875,
0.11764705181121826,
0,
0.11764705181121826,
0.12121211737394333,
0.21276594698429108,
0,
0.060606054961681366,
0.12903225421905518,
0.1304347813129425,
0.04999999701976776,
0.0555555522441864,
0.13333332538604736,
0.05882352590560913,
0.05405404791235924,
0.05128204822540283,
0.19512194395065308,
0.045454539358615875,
0.0624999962747097,
0.054794516414403915,
0.060606054961681366,
0,
0.0714285671710968,
0,
0.05405404791235924,
0.277777761220932,
0.0714285671710968,
0.0833333283662796,
0.0952380895614624,
0.09756097197532654,
0.06896550953388214,
0,
0.07999999821186066,
0,
0,
0.0714285671710968,
0.11764705181121826,
0.0952380895614624,
0.05128204822540283,
0.04255318641662598,
0,
0.04444444179534912,
0,
0,
0.060606054961681366,
0,
0,
0.060606054961681366,
0
] | Bkln2a4tPB | true | [
"Tailoring predictions from sequence models (such as LDSs and RNNs) via an explicit latent code."
] |
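The MTDS record above (paper Bkln2a4tPB) generates all parameters of a dynamical system from a per-sequence latent code z via an MLP whose final expansion is low-rank. The sketch below wires such a hypernetwork to a vanilla RNN cell; the dimensions, the manifold_dim bottleneck, and the cell itself are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MTDSRNNCell(nn.Module):
    """A vanilla RNN cell whose weights are produced from a per-sequence
    latent code z by a small MLP followed by a low-rank linear expansion,
    so the per-task parameters live on a low-dimensional manifold."""

    def __init__(self, x_dim, h_dim, z_dim, manifold_dim=16):
        super().__init__()
        self.x_dim, self.h_dim = x_dim, h_dim
        d = h_dim * x_dim + h_dim * h_dim + h_dim      # W_x, W_h, b
        self.h_phi = nn.Sequential(
            nn.Linear(z_dim, 64), nn.Tanh(),
            nn.Linear(64, manifold_dim),               # small last hidden layer
            nn.Linear(manifold_dim, d),                # low-rank expansion to theta
        )

    def forward(self, z, x_seq):                       # x_seq: (T, x_dim)
        theta = self.h_phi(z)
        i = 0
        W_x = theta[i:i + self.h_dim * self.x_dim].view(self.h_dim, self.x_dim)
        i += self.h_dim * self.x_dim
        W_h = theta[i:i + self.h_dim * self.h_dim].view(self.h_dim, self.h_dim)
        i += self.h_dim * self.h_dim
        b = theta[i:]
        h = torch.zeros(self.h_dim)
        hs = []
        for x_t in x_seq:
            h = torch.tanh(W_x @ x_t + W_h @ h + b)
            hs.append(h)
        return torch.stack(hs)

# toy usage: two latent codes give two different dynamics for the same input
cell = MTDSRNNCell(x_dim=3, h_dim=8, z_dim=2)
x_seq = torch.randn(5, 3)
out_a = cell(torch.tensor([1.0, 0.0]), x_seq)
out_b = cell(torch.tensor([0.0, 1.0]), x_seq)
```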
[
"By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models.",
"However, most existing adversarial training approaches are based on a specific type of adversarial attack.",
"It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks.",
"Moreover, during the adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets.",
"This work is mainly focused on the adversarial training yet efficient FGSM adversary.",
"In this scenario, it is difficult to train a model with great generalization due to the lack of representative adversarial samples, aka the samples are unable to accurately reflect the adversarial domain.",
"To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation (ATDA) method.",
"Our intuition is to regard the adversarial training on FGSM adversary as a domain adaption task with limited number of target domain samples.",
"The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain.",
"Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets.",
"To show the transfer ability of our method, we also extend ATDA to the adversarial training on iterative attacks such as PGD-Adversial Training (PAT) and the defense performance is improved considerably.",
"Deep learning techniques have shown impressive performance on image classification and many other computer vision tasks.",
"However, recent works have revealed that deep learning models are often vulnerable to adversarial examples BID14 , which are maliciously designed to deceive the target model by generating carefully crafted adversarial perturbations on original clean inputs.",
"Moreover, adversarial examples can transfer across models to mislead other models with a high probability BID9 .",
"How to effectively defense against adversarial attacks is crucial for security-critical computer vision systems, such as autonomous driving.As a promising approach, adversarial training defends from adversarial perturbations by training a target classifier with adversarial examples.",
"Researchers have found BID7 BID11 that adversarial training could increase the robustness of neural networks.",
"However, adversarial training often obtains adversarial examples by taking a specific attack technique (e.g., FGSM) into consideration, so the defense targeted such attack and the trained model exhibits weak generalization ability on adversarial examples from other adversaries BID7 .",
"BID22 showed that the robustness of adversarial training can be easily circumvented by the attack that combines with random perturbation from other models.",
"Accordingly, for most existing adversarial training methods, there is a risk of overfitting to adversarial examples crafted on the original model with the specific attack.In this paper, we propose a novel adversarial training method that is able to improve the generalization of adversarial training.",
"From the perspective of domain adaptation (DA) BID20 , there is a big domain gap between the distribution of clean examples and the distribution of adversarial examples in the high-level representation space, even though adversarial perturbations are imperceptible to humans.",
"showed that adversarial perturbations are progressively amplified along the layer hierarchy of neural networks, which maximizes the distance between the original and adversarial subspace representations.",
"In addition, adversarial training simply injects adversarial examples from a specific attack into the training set, but there is still a large sample space for adversarial examples.",
"Accordingly, training with the classification loss on such a training set will probably lead to overfitting on the adversarial examples from the specific attack.",
"Even though BID24 showed that adversarial training with iterative noisy attacks has stronger robustness than the adversarial training with single-step attacks, iterative attacks have a large computational cost and there is no theoretical analysis to justify that the adversarial examples sampled in such way could be sufficiently representative for the adversarial domain.Our contributions are focused on how to improve the generalization of adversarial training on the simple yet scalable attacks, such as FGSM (Goodfellow et al.) .",
"The key idea of our approach is to formulate the learning procedure as a domain adaptation problem with limited number of target domain samples, where target domain denotes adversarial domain.",
"Specifically, we introduce unsupervised as well as supervised domain adaptation into adversarial training to minimize the gap and increase the similarity between the distributions of clean examples and adversarial examples.",
"In this way, the learned models generalize well on adversarial examples from different ∞ bounded attacks.",
"We evaluate our ATDA method on standard benchmark datasets.",
"Empirical results show that despite a small decay of accuracy on clean data, ATDA significantly improves the generalization ability of adversarial training and has the transfer ability to extend to adversarial training on PGD BID11 .",
"In this study, we regard the adversarial training as a domain adaptation task with limited number of target labeled data.",
"By combining adversarial training on FGSM adversary with unsupervised and supervised domain adaptation, the generalization ability on adversarial examples from various attacks and the smoothness on the learned models can be highly improved for robust defense.",
"In addition, ATDA can easily be extended to adversarial training on iterative attacks (e.g., PGD) to improve the defense performance.",
"The experimental results on several benchmark datasets suggest that the proposed ATDA and its extension PATDA achieve significantly better generalization results as compared with current competing adversarial training methods."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21052631735801697,
0.22857142984867096,
0.4285714328289032,
0.1428571343421936,
0.23529411852359772,
0.25,
0.277777761220932,
0.3255814015865326,
0.2926829159259796,
0.2448979616165161,
0.23999999463558197,
0.05405404791235924,
0.18518517911434174,
0.2222222238779068,
0.26923075318336487,
0.2222222238779068,
0.3214285671710968,
0.2857142686843872,
0.4363636374473572,
0.23076923191547394,
0.1395348757505417,
0.2790697515010834,
0.39024388790130615,
0.2716049253940582,
0.260869562625885,
0.2666666507720947,
0.37837836146354675,
0.20000000298023224,
0.40816324949264526,
0.3414634168148041,
0.4313725531101227,
0.2380952388048172,
0.3265306055545807
] | SyfIfnC5Ym | true | [
"We propose a novel adversarial training with domain adaptation method that significantly improves the generalization ability on adversarial examples from different attacks."
] |
[
"Character-level language modeling is an essential but challenging task in Natural Language Processing. \n",
"Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance.",
"However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources.",
"In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequence with long-term dependencies.",
"Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost.",
"As a result, Group-Transformer only uses 18.2\\% of parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8.",
"When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance.",
"The implementation code will be available.",
"Character-level language modeling has become a core task in the field of natural language processing (NLP) such as classification (Zhang et al., 2015) , sequence tagging (Guo et al., 2019a) , question answering (He & Golub, 2016) , and recognition (Baek et al., 2019; Hwang & Sung, 2016) , with its simplicity on generating text and its adaptability to other languages.",
"Along with the development of deep learning in NLP, using recurrent neural networks (RNNs) have been a standard way to solve the problem for many years.",
"Recently, however, a new architecture, Transformer (Vaswani et al., 2017) , have shown promise in addressing this problem and have achieved breakthroughs in general language modeling (Al-Rfou et al., 2019; Dai et al., 2019) .",
"Though this technique has achieved incredible successes, it has led to the huge size of Transformerbased models due to building deeper and wider networks.",
"Transformer-XL (Dai et al., 2019) and GPT-2 , for instance, contain 277M and 1542M parameters, respectively.",
"This trend toward a large size model for performance is not suitable for edge device applications, which require small memory sizes, such as optical character reader (OCR) and speech to text (STT), and for auto-correction and auto-completion applications that need fast real-time responsiveness.",
"To tackle this issue, choosing an appropriately efficient strategy becomes more crucial, especially in the real-world application which requires not only good performance but a lightweight model.",
"In this paper, we introduce a lightweight transformer for character-level language modeling.",
"Our method is one of the factorization methods in that it separates the standard linear layer in transformer architecture using group-wise linear operation and makes sparse connectivity between linear transformations.",
"The proposed model is referred to as Group-Transformer since it is inspired by the group convolution approaches (Zhang et al., 2018; Sandler et al., 2018) that have effectively compressed huge image processing models for usability on mobile devices.",
"While the group strategy reduces parameters and calculations in the proposed modules, its mutually exclusive calculation for the multiple groups compromises performance, caused by the information loss of inter-group correlations.",
"To compensate for this problem, we added two inter-group operations that share a common feature over groups for the group attention layer and linking features in different groups for the group feed-forward layer.",
"By modeling the inter-group information flows, Group-Transformer becomes performant as well as lightweight.",
"We conducted extensive experiments on two benchmark datasets, enwik8 and text8, and found that Group-Transformer with 6M parameters outperformed all LSTM-based models with under 35M parameters.",
"Furthermore, Group-Transformer shows better performance when compared against Transformers with a comparable number of parameters.",
"We provide further analysis to identify the contributions of our proposed modules in detail.",
"To the best of our knowledge, Group-Transformer is the first attempt to build a lightweight Transformer with the group strategy.",
"Recently, remarkable progress has been made in character-level language modeling by Transformer.",
"The advantage of Transformer lies in its effectiveness in modeling long-term dependencies between characters.",
"However, the models have been developed with a huge number of parameters, and the inference of them has required an expensive computational cost.",
"We argue that big models cannot be used in a limited computational environment.",
"Group-Transformer has been developed to prove the effectiveness of Transformer in a lightweight setting.",
"We have grouped features and proposed group-wise operations to reduce the number of parameters and time complexity of Transformer.",
"In addition, to fully realize the advantage of the original Transformer, we have connected the groups to interact with each other.",
"When applying Group-Transformer on enwik8 and text8, we found that Group-Transformer only with 6M parameters achieves better performances than LSTM-based models holding over 30M parameters.",
"Further analysis has proved the effectiveness of the group strategy to reduce computational resources.",
"Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber.",
"Recurrent highway networks.",
"In Proceedings of the 34th International Conference on Machine LearningVolume 70, pp. 4189-4198, 2017."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0714285671710968,
0.0624999962747097,
0.12121211737394333,
0.15789473056793213,
0.06896550953388214,
0.0476190447807312,
0.060606054961681366,
0,
0.0634920597076416,
0.10256409645080566,
0.1428571343421936,
0,
0.06896550953388214,
0.11320754140615463,
0.09756097197532654,
0.38461539149284363,
0.04999999701976776,
0.03999999538064003,
0.04878048226237297,
0.1463414579629898,
0.07692307233810425,
0,
0.06896550953388214,
0,
0.1875,
0.23076923191547394,
0.07407406717538834,
0.05714285373687744,
0.07407406717538834,
0.2142857164144516,
0.19354838132858276,
0,
0,
0,
0,
0,
0
] | rkxdexBYPB | true | [
"This paper proposes a novel lightweight Transformer for character-level language modeling, utilizing group-wise operations."
] |
[
" Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain.",
"Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier.",
"However, DAT is still vulnerable in several aspects including (1) training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, (2) restrictive feature-level alignment, and (3) lack of interpretability or systematic explanation of the learned feature space.",
"In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN).",
"The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures.",
"Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameters settings, greatly alleviating the task of model selection.",
"Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods.",
"Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition.",
"Deep neural networks have gained great success on a wide range of tasks such as visual recognition and machine translation (LeCun et al., 2015) .",
"They usually require a large number of labeled data that can be prohibitively expensive to collect, and even with sufficient supervision their performance can still be poor when being generalized to a new environment.",
"The problem of discrepancy between the training and testing data distribution is commonly referred to as domain shift (Shimodaira, 2000) .",
"To alleviate the effect of such shift, domain adaptation sets out to obtain a model trained in a label-rich source domain to generalize well in an unlabeled target domain.",
"Domain adaptation has benefited various applications in many practical scenarios, including but not limited to object detection under challenging conditions (Chen et al., 2018) , cost-effective learning using only synthetic data to generalize to real-world imagery (Vazquez et al., 2013) , etc.",
"Prevailing methods for unsupervised domain adaptation (UDA) are mostly based on domain alignment which aims to learn domain-invariant features by reducing the distribution discrepancy between the source and target domain using some pre-defined metrics such as maximum mean discrepancy (Tzeng et al., 2014) .",
"Recently, Ganin & Lempitsky (2015) proposed to achieve domain alignment by domainadversarial training (DAT) that reverses the gradients of a domain classifier to maximize domain confusion.",
"Having yielded remarkable performance gain, DAT was employed in many subsequent UDA methods (Long et al., 2018; Shu et al., 2018) .",
"Even so, there still exist three critical issues of DAT that hinder its performance: (1) as the domain classifier has high-capacity to discriminate two domains, the unbalanced adversarial training cannot continuously provide effective gradients, which is usually overcome by manually adjusting the weights of adversarial training according to specific tasks; (2) DAT-based methods cannot deal with pixel-level domain shift (Hoffman et al., 2018) ; (3) the domain-invariant features learned by DAT are only based on intuition but difficult to interpret, which impedes the investigation of the underlying mechanism of adversarial domain adaptation.",
"To overcome the aforementioned difficulties, we propose an innovative DAT approach, namely Max-margin Domain-Adversarial Training (MDAT), to realize stable and comprehensive domain alignment.",
"To demonstrate its effectiveness, we develop an Adversarial Reconstruction Network (ARN) that only utilizes MDAT for UDA.",
"Specifically, ARN consists of a shared feature extractor, a label predictor, and a reconstruction network (i.e. decoder) that serves as a domain classifier.",
"Supervised learning is conducted on source domain, and MDAT helps learn domain-invariant features.",
"In MDAT, the decoder only focuses on reconstructing samples on source domain and pushing the target domain away from a margin, while the feature extractor aims to fool the decoder by learning to reconstruct samples on target domain.",
"In this way, three critical issues can be solved by MDAT: (1) the max-margin loss reduces the discriminative capacity of domain classifier, leading to balanced and thus stable adversarial training; (2) without involving new network structures, MDAT achieves both pixel-level and feature-level domain alignment; (3) visualizing the reconstructed samples reveals how the source and target domains are aligned.",
"We evaluate ARN with MDAT on five visual and non-visual UDA benchmarks.",
"It achieves significant improvement to DAT on all tasks with pixel-level or higher-level domain shift.",
"We also observe that it is insensitive to the choices of hyperparameters and as such is favorable for replication in practice.",
"In principle, our approach is generic and can be used to enhance any UDA methods that leverage domain alignment as an ingredient.",
"Compared with the conventional DAT-based methods that are usually based on a binary logistic network (Ganin & Lempitsky, 2015) , the proposed ARN with MDAT is more attractive and incorporates new merits conceptually and theoretically:",
"(1) Stable training and insensitivity to hyper-parameters.",
"Using the decoder as domain classifier with a margin loss to restrain its overwhelming capacity in adversarial training, the minimax game can continuously provide effective gradients for training the feature extractor.",
"Moreover, through the experiments in Section 4, we discover that our method shows strong robustness to the hyperparameters, i.e. α and m, greatly alleviating the parameters tuning for model selection.",
"(2) Richer information for comprehensive domain alignment.",
"Rather than DAT that uses a bit of domain information, MDAT utilizes the reconstruction network as the domain classifier that could capture more domain-specific and pixel-level features during the unsupervised reconstruction (Bousmalis et al., 2016) .",
"Therefore, MDAT further helps address pixel-level domain shift apart from the feature-level shift, leading to comprehensive domain alignment in a straightforward manner.",
"(3) Feature visualization for method validation.",
"Another key merit of MDAT is that MDAT allows us to visualize the features directly by the reconstruction network.",
"It is crucial to understand to what extent the features are aligned since this helps to reveal the underlying mechanism of adversarial domain adaptation.",
"We will detail the interpretability of these adapted features in Section 4.3.",
"We proposed a new domain alignment approach namely max-margin domain-adversarial training (MDAT) and a MDAT-based network for unsupervised domain adaptation.",
"The proposed method offers effective and stable gradients for the feature learning via an adversarial game between the feature extractor and the reconstruction network.",
"The theoretical analysis provides justifications on how it minimizes the distribution discrepancy.",
"Extensive experiments demonstrate the effectiveness of our method and we further interpret the features by visualization that conforms with our insight.",
"Potential evaluation on semi-supervised learning constitutes our future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.13333332538604736,
0.1818181723356247,
0.125,
0,
0.09756097197532654,
0,
0.1666666567325592,
0,
0.0555555522441864,
0.04878048226237297,
0.19354838132858276,
0.11428570747375488,
0.0416666641831398,
0.15686273574829102,
0.11764705181121826,
0,
0.07058823108673096,
0.23529411852359772,
0.0714285671710968,
0.1249999925494194,
0.0833333283662796,
0.10526315122842789,
0.0952380895614624,
0.08695651590824127,
0.07692307233810425,
0.12903225421905518,
0.1818181723356247,
0.04651162400841713,
0.2222222238779068,
0.14999999105930328,
0.09999999403953552,
0.3333333432674408,
0.0952380895614624,
0.1249999925494194,
0.11764705181121826,
0,
0.1249999925494194,
0,
0.48275861144065857,
0.19354838132858276,
0,
0.06666666269302368,
0
] | BklEF3VFPB | true | [
"A stable domain-adversarial training approach for robust and comprehensive domain adaptation"
] |