Dataset schema (column name: type, observed length range): id: string (12-15 characters); title: string (8-162 characters); content: string (1-17.6k characters); prechunk_id: string (0-15 characters); postchunk_id: string (0-15 characters); arxiv_id: string (10 characters); references: list (length 1).
1605.09782#62
Adversarial Feature Learning
Figure 5: For the query images used in Krähenbühl et al. (2016) (left), nearest neighbors (by minimum cosine distance) from the ImageNet LSVRC (Russakovsky et al., 2015) training set in the fc6 feature space of the ImageNet-trained BiGAN encoder E. (The fc6 weights are set randomly; this space is a random projection of the learned conv5 feature space.) Timing: a single epoch (one training pass over the 1.2 million images) of BiGAN training takes roughly 40 minutes on a Titan X GPU. Models are trained for 100 epochs, for a total training time of under 3 days. Nearest neighbors: in Figure 5 we present nearest neighbors in the feature space of the BiGAN encoder E learned in unsupervised ImageNet training.
1605.09782#61
1605.09782#63
1605.09782
[ "1605.02688" ]
1605.09782#63
Adversarial Feature Learning
18
1605.09782#62
1605.09782
[ "1605.02688" ]
1605.09090#0
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
arXiv:1605.09090v1 [cs.CL] 30 May 2016 # Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention Yang Liu, Chengjie Sun, Lei Lin and Xiaolong Wang Harbin Institute of Technology, Harbin, P.R. China {yliu,cjsun,linl,wangxl}@insun.hit.edu.cn # Abstract In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of a sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) outputs to generate a first-stage sentence representation. Secondly, an attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using the target sentence to attend over words in the source sentence, we utilized the sentence'
1605.09090#1
1605.09090
[ "1512.08422" ]
1605.09090#1
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
s first-stage representation to attend over words appearing in itself, which is called "Inner-Attention" in our paper. Experiments conducted on the Stanford Natural Language Inference (SNLI) Corpus have proved the effectiveness of the "Inner-Attention" mechanism. With fewer parameters, our model outperformed the existing best sentence encoding-based approach by a large margin. Table 1 example: P: The boy is running through a grassy area. H: The boy is in his room. / A boy is running outside. / The boy is in a park.
1605.09090#0
1605.09090#2
1605.09090
[ "1512.08422" ]
1605.09090#2
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
C / E / N (the labels for the three Table 1 examples). Table 1: Examples of the three types of label in RTE, where P stands for Premise and H stands for Hypothesis. also explored by many researchers, but not widely used because of their complexity and domain limitations. The recently published Stanford Natural Language Inference (SNLI) corpus makes it possible to use deep learning methods to solve RTE problems. The deep learning approaches proposed so far can be roughly categorized into two groups: sentence encoding-based models and matching encoding-based models. As the name implies, the encoding of the sentence is the core of the former methods, while the latter methods directly model the relation between two sentences and don't generate sentence representations at all.
1605.09090#1
1605.09090#3
1605.09090
[ "1512.08422" ]
1605.09090#3
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
# 1 Introduction Given a pair of sentences, the goal of recognizing text entailment (RTE) is to determine whether the hypothesis can reasonably be inferred from the premise. There are three types of relation in RTE: Entailment (inferred to be true), Contradiction (inferred to be false) and Neutral (truth unknown). A few examples are given in Table 1. Traditional approaches to RTE have been the dominion of classifiers employing hand-engineered features, which heavily relied on natural language processing pipelines and external resources. Formal reasoning methods (Bos and Markert, 2005) were In view of universality, we focused our efforts on the sentence encoding-based model. Existing methods of this kind include LSTM-based, GRU-based, TBCNN-based and SPINN-based models. Unidirectional LSTMs and GRUs suffer from the weakness of not utilizing contextual information from future tokens, and Convolutional Neural Networks do not make full use of the information contained in word order. A bidirectional LSTM utilizes both the previous and future context by processing the sequence in two directions, which helps address the drawbacks mentioned above. 1http://nlp.stanford.edu/projects/snli/ Figure 1: Architecture of the Bidirectional LSTM model with Inner-Attention, composed of (a) a sentence input module, (B) a sentence encoding module with mean pooling, and (c) a sentence matching module; the example premise and hypothesis shown in the figure are the polo-shirt sentences given in Section 3.3. (Tan et al., 2015) A recent work by Rocktäschel et al. (2015) improved the performance by applying a neural attention model that does not yield sentence embeddings. In this paper, we proposed a unified deep learning framework for recognizing textual entailment which does not require any feature engineering or external resources. The basic model is built from biLSTM models over both the premise and the hypothesis. The basic mean pooling encoder can roughly form an intuition about what the sentence is talking about. Having obtained this representation, we extended the model by utilizing an Inner-Attention mechanism on both sides. This mechanism helps generate more accurate and focused sentence representations for classifi-
1605.09090#2
1605.09090#4
1605.09090
[ "1512.08422" ]
1605.09090#4
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
cation. In addition, we introduced a simple but effective input strategy that gets rid of words shared by the hypothesis and premise, which further boosts our performance. Without parameter tuning, we improved the state-of-the-art performance of sentence encoding-based models by nearly 2%. # 2 Our approach In our work, we treated the RTE task as a supervised three-way classification problem. The overall architecture of our model is shown in Figure 1. The design of this model follows the idea of the Siamese Network: the two identical sentence encoders share the same set of weights during training, and the two sentence representations are then combined to generate a "relation vector" for classification. As we can see from the fi-
1605.09090#3
1605.09090#5
1605.09090
[ "1512.08422" ]
1605.09090#5
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
gure, the model mainly consists of three parts, from top to bottom: (A) the sentence input module; (B) the sentence encoding module; (C) the sentence matching module. We explain the last two parts in detail in the following subsections; the sentence input module is introduced in Section 3.3. # 2.1 Sentence Encoding Module The sentence encoding module is the fundamental part of this model. To generate better sentence representations, we employed a two-step strategy to encode sentences. Firstly, an average pooling layer was built on top of word-level biLSTMs to produce a sentence vector. This simple encoder, combined with the sentence matching module, forms the basic architecture of our model. With far fewer parameters, this basic model alone can outperform the state-of-the-art method by a small margin (refer to Table 3). Secondly, an attention mechanism was employed on the same sentence: instead of using the target sentence representation to attend over words in the source sentence, we used the representation generated in the previous stage to attend over words appearing in the sentence itself, which results in a distribution similar to other attention mechanisms' weights, with more attention given to important words.2 The idea of "Inner-Attention" was inspired by the observation that when humans read a sentence, they can usually form a rough intuition about which part of the sentence is more important according to past experience. We implemented this idea using an attention mechanism in our model. The attention mechanism is formalized as follows: M = tanh(W^y Y + W^h R_ave ⊗ e_L), α = softmax(w^T M), R_att = Y α^T, where Y is a matrix consisting of the output vectors of the biLSTM, R_ave is the output of the mean pooling layer, α denotes the attention vector and R_att is the attention-weighted sentence representation. # 2.2 Sentence Matching Module Once the sentence vectors are generated, three matching methods are applied to extract relations between premise and hypothesis: • Concatenation of the two representations • Element-wise product • Element-wise difference This matching architecture was first used by Mou et al. (2015). Finally, we used a SoftMax layer over the output of a non-linear projection of the generated matching vector for classifi-
1605.09090#4
1605.09090#6
1605.09090
[ "1512.08422" ]
1605.09090#6
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
cation. # 3 Experiments # 3.1 Dataset To evaluate the performance of our model, we conducted experiments on the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015). At 570K pairs, SNLI is two orders of magnitude larger than all other resources of its type. The dataset was constructed by crowdsourced effort, with each sentence written by humans. The labels comprise three classes: Entailment, Contradiction, and Neutral (Footnote 2: Yang et al. (2016) proposed a Hierarchical Attention model for document classification in which attention is also used, but the target representation in their attention mechanism is randomly initialized.)
1605.09090#5
1605.09090#7
1605.09090
[ "1512.08422" ]
1605.09090#7
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
(two irrelevant sentences). We applied the standard train/validation/test split, containing 550k, 10k, and 10k samples, respectively. # 3.2 Parameter Setting The training objective of our model is cross-entropy loss, and we use minibatch SGD with RMSprop (Tieleman and Hinton, 2012) for optimization. The batch size is 128. A dropout layer is applied to the output of the network with the dropout rate set to 0.25. In our model, we used the pretrained 300D GloVe 840B vectors (Pennington et al., 2014) to initialize the word embeddings. Out-of-vocabulary words in the training set are randomly initialized by sampling values uniformly from (-0.05, 0.05). None of these embeddings are updated during training.
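To make the encoder of Section 2.1 and the matching module of Section 2.2 concrete, here is a minimal NumPy sketch of the inner-attention computation M = tanh(W^y Y + W^h R_ave ⊗ e_L), α = softmax(w^T M), R_att = Y α^T and of the three matching features. This is not the authors' Keras implementation; the shapes, weight names and random stand-ins for trained biLSTM outputs are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def inner_attention(Y, W_y, W_h, w):
    """Re-weight biLSTM outputs with the sentence's own mean-pooled vector.

    Y   : (2h, L) matrix of biLSTM output vectors (one column per token)
    W_y : (2h, 2h) projection applied to the token vectors
    W_h : (2h, 2h) projection applied to the mean-pooled representation
    w   : (2h,)   attention scoring vector
    """
    L = Y.shape[1]
    R_ave = Y.mean(axis=1, keepdims=True)                   # mean pooling, (2h, 1)
    M = np.tanh(W_y @ Y + W_h @ R_ave @ np.ones((1, L)))    # broadcast R_ave over tokens
    alpha = softmax(w @ M)                                  # attention weights, (L,)
    return Y @ alpha                                        # attention-weighted sentence vector

def matching_features(r_p, r_h):
    """Concatenation, element-wise product and element-wise difference."""
    return np.concatenate([r_p, r_h, r_p * r_h, r_p - r_h])

# Toy usage with random stand-ins for trained biLSTM outputs.
rng = np.random.default_rng(0)
h2, L_p, L_h = 8, 6, 5                       # 2*hidden size and sentence lengths (made up)
W_y, W_h = rng.normal(size=(h2, h2)), rng.normal(size=(h2, h2))
w = rng.normal(size=h2)
Y_premise = rng.normal(size=(h2, L_p))
Y_hypothesis = rng.normal(size=(h2, L_h))
relation_vector = matching_features(
    inner_attention(Y_premise, W_y, W_h, w),
    inner_attention(Y_hypothesis, W_y, W_h, w))
print(relation_vector.shape)                 # (4 * 2h,)
```

The resulting relation vector would then go through the non-linear projection and SoftMax classifier described in Section 2.2.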
1605.09090#6
1605.09090#8
1605.09090
[ "1512.08422" ]
1605.09090#8
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
We didn't tune the word representations for two reasons: 1. to reduce the number of parameters needed to train; 2. to keep their representations close to those of unseen similar words at inference time, which improves the model's generalization ability. The model is implemented using the open-source framework Keras.3 # 3.3 The Input Strategy In this part, we investigated four strategies for modifying the input of our basic model that help us increase performance. The four strategies are: • Inverting Premises (Sutskever et al., 2014) • Doubling Premises (Zaremba and Sutskever, 2014) • Doubling Hypotheses • Differentiating Inputs (removing words that appear in both premise and hypothesis) Experimental results are illustrated in Table 2. As we can see, doubling the hypothesis and differentiating the inputs both improved our model's performance. Since hypotheses are usually much shorter than premises, doubling the hypothesis may absorb this difference and emphasize its meaning twice. The differentiating-input strategy forces the model to focus on the differing parts of the two sentences, which may help the classification of Neutral and Contradiction examples, as we observed that our model tended to assign unconfident instances to Entailment. The original input sentences appearing in Figure 1 are: Premise:
1605.09090#7
1605.09090#9
1605.09090
[ "1512.08422" ]
1605.09090#9
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
Two man in polo shirts and tan pants immersed in a pleasant conversation about photograph. 3http://keras.io/ Table 2: Comparison of different input strategies (Input Strategy: Test Acc.): Original Sequences: 83.24%; Inverting Premises: 82.60%; Doubling Premises: 83.66%; Doubling Hypothesis: 82.83%; Differentiating Inputs: 83.72%. Hypothesis: Two man in polo shirts and tan pants involved in a heated discussion about Canon. Label: Contradiction. While most of the words in this pair of sentences are the same or semantically close, it is hard for the model to distinguish the difference between them, which results in labeling the pair as Neutral or Entailment. Through the differentiating-inputs strategy, this kind of problem can be solved. # 3.4 Comparison Methods In this part, we compared our model against the following state-of-the-art baseline approaches:
1605.09090#8
1605.09090#10
1605.09090
[ "1512.08422" ]
1605.09090#10
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
• LSTM enc: 100D LSTM encoders + MLP (Bowman et al., 2015) • GRU enc: 1024D GRU encoders + skip-thoughts + cat, - (Vendrov et al., 2015) • TBCNN enc: 300D Tree-based CNN encoders + cat, ◦, - (Mou et al., 2015) • SPINN enc: 300D SPINN-NP encoders + cat, ◦, - (Bowman et al., 2016) • Static-Attention: 100D LSTM + static attention (Rocktäschel et al., 2015) • WbW-Attention: 100D LSTM + word-by-word attention (Rocktäschel et al., 2015). Here cat refers to concatenation, and - and ◦ denote element-wise difference and product, respectively; our model is much simpler and easier to understand. # 3.5 Results and Qualitative Analysis
1605.09090#9
1605.09090#11
1605.09090
[ "1512.08422" ]
1605.09090#11
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
Although the classification of an RTE example does not rely solely on the representations obtained from attention, it is instructive to analyse the Inner-Attention mechanism, as we witnessed a large performance increase after employing it. We hand-picked several examples from the dataset to visualize. In order to make the weights more discriminative, we didn't use a uniform color atlas across sentences; that is, each sentence has its own color atlas, in which the lightest color and the darkest color denote the smallest and the biggest attention weight within the sentence, respectively. Table 3: Performance comparison of different models on SNLI (Model: Params, Test Acc.). Sentence encoding-based models: LSTM enc: 3.0M, 80.6%; GRU enc: 15M, 81.4%; TBCNN enc: 3.5M, 82.1%; SPINN enc: 3.7M, 83.2%; Basic model: 2.0M, 83.3%; + Inner-Attention: 2.8M, 84.2%; + Diversing Input: 2.8M, 85.0%. Other neural network models: Static-Attention: 242K, 82.4%; WbW-Attention: 252K, 83.5%. Visualizations of Inner-Attention on these examples are depicted in Figure 2.
1605.09090#10
1605.09090#12
1605.09090
[ "1512.08422" ]
1605.09090#12
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
Figure 2: Inner-Attention Visualizations (three hand-picked sentences shaded by per-word attention weight; only the caption is recoverable from the extracted text). We observed that more attention was given to nouns, verbs and adjectives. This conforms to our experience that these words are semantically richer than function words. While mean pooling regards each word as equally important, the attention mechanism helps re-weight words according to their importance, and more focused and accurate sentence representations are generated based on the produced attention vectors.
1605.09090#11
1605.09090#13
1605.09090
[ "1512.08422" ]
1605.09090#13
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
# 4 Conclusion and Future work In this paper, we proposed a bidirectional LSTM-based model with Inner-Attention to solve the RTE problem. We came up with the idea of utilizing an attention mechanism within a sentence, which can teach the model to attend to words without information from the other sentence. The Inner-Attention mechanism helps produce more accurate sentence representations through attention vectors. In addition, the simple but effective diversing input strategy we introduced further boosts our results. This model can also be easily adapted to other sentence-matching models. Our future work includes:
1605.09090#12
1605.09090#14
1605.09090
[ "1512.08422" ]
1605.09090#14
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
1. Employ this architecture on other sentence-matching tasks such as Question Answering, Paraphrase Identification and Sentence Text Similarity. 2. Try more heuristic matching methods to make full use of the sentence vectors. # Acknowledgments We thank all anonymous reviewers for their hard work! # References [Bos and Markert2005] Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 628–
1605.09090#13
1605.09090#15
1605.09090
[ "1512.08422" ]
1605.09090#15
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
635. Association for Computational Linguistics. [Bowman et al.2015] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. [Bowman et al.2016] Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016.
1605.09090#14
1605.09090#16
1605.09090
[ "1512.08422" ]
1605.09090#16
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
A fast unified model for parsing and sentence understanding. arXiv preprint arXiv:1603.06021. [Mou et al.2015] Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2015. Recognizing entailment and contradiction by tree-based convolution. arXiv preprint arXiv:1512.08422. [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014.
1605.09090#15
1605.09090#17
1605.09090
[ "1512.08422" ]
1605.09090#17
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532–1543. [Rocktäschel et al.2015] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. [Tan et al.2015] Ming Tan, Bing Xiang, and Bowen Zhou. 2015. LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108. [Tieleman and Hinton2012] Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-RMSProp. COURSERA:
1605.09090#16
1605.09090#18
1605.09090
[ "1512.08422" ]
1605.09090#18
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
Neural networks for machine learning. [Vendrov et al.2015] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361. [Yang et al.2016] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classifi-
1605.09090#17
1605.09090#19
1605.09090
[ "1512.08422" ]
1605.09090#19
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
cation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. [Zaremba and Sutskever2014] Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. arXiv preprint arXiv:1410.4615.
1605.09090#18
1605.09090
[ "1512.08422" ]
1605.08803#0
Density estimation using Real NVP
arXiv:1605.08803v3 [cs.LG] 27 Feb 2017 Published as a conference paper at ICLR 2017 # DENSITY ESTIMATION USING REAL NVP Laurent Dinh* (Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC H3T 1J4), Jascha Sohl-Dickstein (Google Brain), Samy Bengio (Google Brain) # ABSTRACT
1605.08803#1
1605.08803
[ "1602.05110" ]
1605.08803#1
Density estimation using Real NVP
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
1605.08803#0
1605.08803#2
1605.08803
[ "1602.05110" ]
1605.08803#2
Density estimation using Real NVP
# 1 Introduction The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible. One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super-resolution [9]. As data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture its complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data. This model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model. # 2 Related work Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models
1605.08803#1
1605.08803#3
1605.08803
[ "1602.05110" ]
1605.08803#3
Density estimation using Real NVP
* Work was done when author was at Google Brain. Figure 1: Real NVP learns an invertible, stable mapping between a data distribution p̂_X (data space X) and a latent distribution p_Z (latent space Z, typically a Gaussian). Inference: x ~ p̂_X, z = f(x); generation: z ~ p_Z, x = f^{-1}(z). Here we show a mapping that has been learned on a toy 2-d dataset. The function f(x) maps samples x from the data distribution in the upper left into approximate samples z from the latent distribution, in the upper right. This corresponds to exact inference of the latent state given the data. The inverse function, f^{-1}(z), maps samples z from the latent distribution in the lower right into approximate samples x from the data distribution in the lower left. This corresponds to exact generation of samples from the model. The transformation of grid lines in X and Z space is additionally illustrated for both f(x) and f^{-1}(z).
1605.08803#2
1605.08803#4
1605.08803
[ "1602.05110" ]
1605.08803#4
Density estimation using Real NVP
remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7]. Directed graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49] allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network, which maps Gaussian latent variables z to samples x, and a matched approximate inference network that maps samples x to a semantically meaningful latent representation z, by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34]. Such approximations can be avoided altogether by abstaining from using latent variables. Auto-regressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions, simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long short-term memory [26], and residual networks [25, 24] in order to learn state-of-the-art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency.
1605.08803#3
1605.08803#5
1605.08803
[ "1602.05110" ]
1605.08803#5
Density estimation using Real NVP
For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning. Generative Adversarial Networks (GANs) [21], on the other hand, can train any differentiable generative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistic-looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior. Training such a generative network g that maps a latent variable z ~ p_Z to a sample x ~ p_X does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if g is bijective, it can be trained through maximum likelihood using the change of variable formula:
1605.08803#4
1605.08803#6
1605.08803
[ "1602.05110" ]
1605.08803#6
Density estimation using Real NVP
p_X(x) = p_Z(z) |det(∂g(z)/∂z^T)|^{-1}. (1) This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as a tractable instance of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use. # 3 Model defi-
1605.08803#5
1605.08803#7
1605.08803
[ "1602.05110" ]
1605.08803#7
Density estimation using Real NVP
nition In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed form reconstruction cost such as square error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.
1605.08803#6
1605.08803#8
1605.08803
[ "1602.05110" ]
1605.08803#8
Density estimation using Real NVP
# 3.1 Change of variable formula Given an observed data variable x ∈ X, a simple prior probability distribution p_Z on a latent variable z ∈ Z, and a bijection f : X → Z (with g = f^{-1}), the change of variable formula defines a model distribution on X by p_X(x) = p_Z(f(x)) |det(∂f(x)/∂x^T)| (2), log(p_X(x)) = log(p_Z(f(x))) + log(|det(∂f(x)/∂x^T)|) (3), where ∂f(x)/∂x^T is the Jacobian of f at x. Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample z ~ p_Z is drawn in the latent space, and its inverse image x = f^{-1}(z) = g(z) generates a sample in the original space. Computing the density at a point x is accomplished by computing the density of its image f(x) and multiplying by the associated Jacobian determinant |det(∂f(x)/∂x^T)|. See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model.
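As a concrete illustration of Equations (2) and (3), the sketch below evaluates the exact log-density and draws exact samples for a toy one-dimensional bijection. The particular map f(x) = exp(s)·x + t is chosen only for brevity; it is an assumption for illustration, not the coupling-layer architecture of this paper.

```python
import numpy as np

# Toy bijection f(x) = exp(s) * x + t on the real line, with fixed scalars s, t.
s, t = 0.5, -1.0

def f(x):                 # data -> latent (inference direction)
    return np.exp(s) * x + t

def f_inv(z):             # latent -> data (sampling direction)
    return (z - t) * np.exp(-s)

def log_px(x):
    z = f(x)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard Gaussian prior p_Z
    log_det_jac = s                                # log |df/dx| = log exp(s) = s
    return log_pz + log_det_jac                    # Equation (3)

x = np.array([0.3, -1.2])
print(log_px(x))              # exact log-density evaluation
z_samples = np.random.randn(5)
print(f_inv(z_samples))       # exact samples from the model distribution
```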
1605.08803#7
1605.08803#9
1605.08803
[ "1602.05110" ]
1605.08803#9
Density estimation using Real NVP
(a) Forward propagation (b) Inverse propagation Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part x_2 of the input vector, conditioned on the remaining part of the input vector x_1. Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions s and t, significantly increases the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost.
1605.08803#8
1605.08803#10
1605.08803
[ "1602.05110" ]
1605.08803#10
Density estimation using Real NVP
# 3.2 Coupling layers Computing the Jacobian of functions with high-dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This, combined with the restriction to bijective functions, makes Equation 2 appear impractical for modeling arbitrary distributions. As shown however in [17], by careful design of the function f, a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms.
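The triangular-determinant observation is easy to check numerically; the short snippet below is only a sanity check on an assumed random lower-triangular Jacobian, not part of the paper's method.

```python
import numpy as np

# For a triangular matrix the determinant is the product of the diagonal
# entries, so it costs O(D) instead of the O(D^3) of a generic determinant.
rng = np.random.default_rng(0)
D = 5
J = np.tril(rng.normal(size=(D, D)))    # a random lower-triangular "Jacobian"
print(np.linalg.det(J))                 # generic determinant
print(np.prod(np.diag(J)))              # product of diagonal terms: same value
```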
1605.08803#9
1605.08803#11
1605.08803
[ "1602.05110" ]
1605.08803#11
Density estimation using Real NVP
We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a D dimensional input x and d < D, the output y of an affine coupling layer follows the equations y_{1:d} = x_{1:d} (4), y_{d+1:D} = x_{d+1:D} ⊙ exp(s(x_{1:d})) + t(x_{1:d}) (5), where s and t stand for scale and translation, and are functions from R^d to R^{D-d}, and ⊙ is the Hadamard (element-wise) product (see Figure 2(a)). # 3.3 Properties The Jacobian of this transformation is ∂y/∂x^T = [[I_d, 0], [∂y_{d+1:D}/∂x_{1:d}^T, diag(exp[s(x_{1:d})])]] (6), where diag(exp[s(x_{1:d})]) is the diagonal matrix whose diagonal elements correspond to the vector exp[s(x_{1:d})]. Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as exp[Σ_j s(x_{1:d})_j]. Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of s or t, those functions can be arbitrarily complex. We will make them deep convolutional neural networks. Note that the hidden layers of s and t can have more features than their input and output layers. Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation
1605.08803#10
1605.08803#12
1605.08803
[ "1602.05110" ]
1605.08803#12
Density estimation using Real NVP
Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the 4 × 4 × 1 tensor (on the left) into a 2 × 2 × 4 tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward. (see Figure 2(b)),
1605.08803#11
1605.08803#13
1605.08803
[ "1602.05110" ]
1605.08803#13
Density estimation using Real NVP
y_{1:d} = x_{1:d}, y_{d+1:D} = x_{d+1:D} ⊙ exp(s(x_{1:d})) + t(x_{1:d}) (7); x_{1:d} = y_{1:d}, x_{d+1:D} = (y_{d+1:D} - t(y_{1:d})) ⊙ exp(-s(y_{1:d})) (8), meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of s or t, so these functions can be arbitrarily complex and difficult to invert. # 3.4 Masked convolution Partitioning can be implemented using a binary mask b, and using the functional form for y, y = b ⊙ x + (1 - b) ⊙ (x ⊙ exp(s(b ⊙ x)) + t(b ⊙ x)). (9) We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask b is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both s(·) and t(·) are rectified convolutional networks. # 3.5 Combining coupling layers Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)). The Jacobian determinant of the resulting function remains tractable, relying on the fact that
1605.08803#12
1605.08803#14
1605.08803
[ "1602.05110" ]
1605.08803#14
Density estimation using Real NVP
∂(f_b ∘ f_a)/∂x_a^T (x_a) = ∂f_a/∂x_a^T (x_a) · ∂f_b/∂x_b^T (x_b = f_a(x_a)) (10) and det(A · B) = det(A) det(B) (11). Similarly, its inverse can be computed easily as (f_b ∘ f_a)^{-1} = f_a^{-1} ∘ f_b^{-1} (12). Figure 4: Composition schemes for affine coupling layers. (a) In this alternating pattern, units which remain identical in one transformation are modified in the next. (b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation. # 3.6 Multi-scale architecture We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape 2 × 2 × c, then reshapes them into subsquares of shape 1 × 1 × 4c. The squeezing operation transforms an s × s × c tensor into an s/2 × s/2 × 4c tensor (see Figure 3), effectively trading spatial size for number of channels. At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3).
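Putting Equations (4) to (9) together, a masked affine coupling layer with its exact inverse and log-determinant can be sketched in a few lines. In the NumPy sketch below, the toy functions standing in for s(·) and t(·), the flat 8-dimensional input and the even-index mask are illustrative assumptions; in the paper, s and t are deep convolutional residual networks operating on image tensors with checkerboard or channel-wise masks.

```python
import numpy as np

def coupling_forward(x, b, s_fn, t_fn):
    """Affine coupling layer in the masked form of Equation (9).

    b is a binary mask; the masked part b*x is left unchanged and conditions
    the scale s and translation t applied to the remaining part.
    """
    s = s_fn(b * x)
    t = t_fn(b * x)
    y = b * x + (1 - b) * (x * np.exp(s) + t)
    log_det_jac = np.sum((1 - b) * s)        # sum of the scales actually used
    return y, log_det_jac

def coupling_inverse(y, b, s_fn, t_fn):
    """Inverse coupling layer (Equation (8)); never inverts s or t."""
    s = s_fn(b * y)                          # b*y equals b*x, so s, t are recomputed exactly
    t = t_fn(b * y)
    return b * y + (1 - b) * ((y - t) * np.exp(-s))

# Toy stand-ins for the deep convolutional networks s(.) and t(.) of the paper.
s_fn = lambda u: np.tanh(u.sum()) * np.ones_like(u)
t_fn = lambda u: 0.5 * u.sum() * np.ones_like(u)

# Checkerboard-style mask on a flat 8-dimensional example (1 where the index is even).
x = np.linspace(-1.0, 1.0, 8)
b = (np.arange(8) % 2 == 0).astype(float)

y, log_det = coupling_forward(x, b, s_fn, t_fn)
x_rec = coupling_inverse(y, b, s_fn, t_fn)
print(np.allclose(x, x_rec), log_det)        # exact invertibility, tractable log-det
```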
1605.08803#13
1605.08803#15
1605.08803
[ "1602.05110" ]
1605.08803#15
Density estimation using Real NVP
For the final scale, we only apply four coupling layers with alternating checkerboard masks. Propagating a D dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)): h^(0) = x (13); (z^(i+1), h^(i+1)) = f^(i+1)(h^(i)) (14); z^(L) = f^(L)(h^(L-1)) (15); z = (z^(1), ..., z^(L)) (16). In our experiments, we use this operation for i < L. The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing f^(i) (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in s and t is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16). As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features as shown in Appendix D. Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also significantly reduces the amount of computation and memory used by the model, allowing us to train larger models.
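The squeezing operation used at each scale is a fixed invertible reshaping; one possible NumPy implementation is sketched below. The ordering of the four sub-pixels within the new channel dimension is an implementation choice the text does not pin down, so treat it as an assumption.

```python
import numpy as np

def squeeze(x):
    """Trade spatial size for channels: (s, s, c) -> (s//2, s//2, 4c)."""
    s, _, c = x.shape
    x = x.reshape(s // 2, 2, s // 2, 2, c)
    x = x.transpose(0, 2, 1, 3, 4)          # (s/2, s/2, 2, 2, c)
    return x.reshape(s // 2, s // 2, 4 * c)

def unsqueeze(x):
    """Inverse of squeeze: (s, s, 4c) -> (2s, 2s, c)."""
    s, _, c4 = x.shape
    c = c4 // 4
    x = x.reshape(s, s, 2, 2, c)
    x = x.transpose(0, 2, 1, 3, 4)          # (s, 2, s, 2, c)
    return x.reshape(2 * s, 2 * s, c)

x = np.arange(4 * 4 * 1, dtype=float).reshape(4, 4, 1)   # the 4x4x1 example of Figure 3
y = squeeze(x)
print(y.shape)                              # (2, 2, 4)
print(np.allclose(unsqueeze(y), x))         # a volume-preserving permutation: True
```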
1605.08803#14
1605.08803#16
1605.08803
[ "1602.05110" ]
1605.08803#16
Density estimation using Real NVP
# 3.7 Batch normalization To further improve the propagation of the training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in s and t. As described in Appendix E, we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches. We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics µ̃ and σ̃², the rescaling function x ↦ (x − µ̃) / sqrt(σ̃² + ε) (17) has a Jacobian determinant (Π_i (σ̃_i² + ε))^{-1/2}. (18)
1605.08803#15
1605.08803#17
1605.08803
[ "1602.05110" ]
1605.08803#17
Density estimation using Real NVP
This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65]. We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach. # 4 Experiments # 4.1 Procedure The algorithm described in Equation 2 shows how to learn distributions on unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in (0, 256] after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of logit(α + (1 − α) ⊙ x/256), where α is picked here as 0.05. We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to include horizontal flips of the training examples. We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], and CelebFaces Attributes (CelebA) [41].
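A small sketch of this pre-processing, together with a common way of converting a log-likelihood in nats into bits per dimension, is given below; the bits-per-dimension convention is an assumption rather than something spelled out in the text.

```python
import numpy as np

ALPHA = 0.05

def preprocess(pixels):
    """Map pixel values to an unbounded space via logit(alpha + (1 - alpha) * x / 256)."""
    p = ALPHA + (1.0 - ALPHA) * pixels / 256.0
    return np.log(p) - np.log1p(-p)          # logit(p) = log(p / (1 - p))

def bits_per_dim(log_likelihood_nats, num_dims):
    """Convert a per-example log-likelihood in nats to bits per dimension.
    This conversion convention is an assumption, not taken from the paper text."""
    return -log_likelihood_nats / (num_dims * np.log(2.0))

x = np.array([1.0, 128.0, 255.0])
print(preprocess(x))
print(bits_per_dim(-6000.0, 32 * 32 * 3))    # hypothetical numbers for a 32x32x3 image
```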
1605.08803#16
1605.08803#18
1605.08803
[ "1602.05110" ]
1605.08803#18
Density estimation using Real NVP
More specifically, we train on the downsampled 32 × 32 and 64 × 64 versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample each image so that its smallest side is 96 pixels and take random crops of 64 × 64. For CelebA, we use the same procedure as in [38]: we take an approximately central crop of 148 × 148 then resize it to 64 × 64.
1605.08803#17
1605.08803#19
1605.08803
[ "1602.05110" ]
1605.08803#19
Density estimation using Real NVP
We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers, with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions s, we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function t has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a 4 × 4 × c tensor. For datasets of images of size 32 × 32, we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size 64 × 64. We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an L2 regularization on the weight scale parameters with coefficient 5 · 10^{-5}.
1605.08803#18
1605.08803#20
1605.08803
[ "1602.05110" ]
1605.08803#20
Density estimation using Real NVP
We set the prior p_Z to be an isotropic unit norm Gaussian. However, any distribution could be used for p_Z, including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder. Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parenthesis for reference). PixelRNN [46]: CIFAR-10 3.00; Imagenet (32 × 32) 3.86 (3.83); Imagenet (64 × 64) 3.63 (3.57). Real NVP: CIFAR-10 3.49; Imagenet (32 × 32) 4.28 (4.26); Imagenet (64 × 64) 3.98 (3.75); LSUN (bedroom) 2.72 (2.70); LSUN (tower) 2.81 (2.78); LSUN (church outdoor) 3.08 (2.94); CelebA 3.02 (2.97). Conv DRAW [22]: CIFAR-10 < 3.59; Imagenet (32 × 32) < 4.40 (4.35); Imagenet (64 × 64) < 4.10 (4.04). IAF-VAE [34]: CIFAR-10 < 3.28.
1605.08803#19
1605.08803#21
1605.08803
[ "1602.05110" ]
1605.08803#21
Density estimation using Real NVP
Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet (32 × 32), Imagenet (64 × 64), CelebA, LSUN (bedroom). # 4.2 Results We show in Table 1 that the number of bits per dimension, while not improving over the Pixel RNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfi-
1605.08803#20
1605.08803#22
1605.08803
[ "1602.05110" ]
1605.08803#22
Density estimation using Real NVP
tting is expected. We show in Figure 5 samples generated from the model alongside training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity over sample quality in a limited capacity setting. Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet (64 × 64), LSUN (tower), LSUN (bedroom). As a result, our model sometimes outputs highly improbable samples, as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed form reconstruction cost like an L2 norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows. We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples z_(1), z_(2), z_(3), z_(4), parametrized by two parameters φ and φ′: z = cos(φ) (cos(φ′) z_(1) + sin(φ′) z_(2)) + sin(φ) (cos(φ′) z_(3) + sin(φ′) z_(4)). (19)
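Equation (19) can be reproduced directly; in the sketch below, random Gaussian vectors stand in for the four encoded validation examples z_(i) = f(x_(i)), and each grid point would be decoded back to image space with g = f^{-1}.

```python
import numpy as np

def manifold_point(z, phi, phi_prime):
    """Equation (19): combine four latent codes z[0..3] with angles phi, phi'."""
    return (np.cos(phi) * (np.cos(phi_prime) * z[0] + np.sin(phi_prime) * z[1])
            + np.sin(phi) * (np.cos(phi_prime) * z[2] + np.sin(phi_prime) * z[3]))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 64))                 # stand-ins for f(x) of four examples
grid = np.array([[manifold_point(z, phi, phip)
                  for phip in np.linspace(0, np.pi / 2, 5)]
                 for phi in np.linspace(0, np.pi / 2, 5)])
print(grid.shape)                            # (5, 5, 64); decode each point with g = f^{-1}
```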
1605.08803#21
1605.08803#23
1605.08803
[ "1602.05110" ]
1605.08803#23
Density estimation using Real NVP
We project the resulting manifold back into the data space by computing g(z). Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F). # 5 Discussion and conclusion In this paper, we have defi-
1605.08803#22
1605.08803#24
1605.08803
[ "1602.05110" ]
1605.08803#24
Density estimation using Real NVP
ned a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performance, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual network architectures [60]. This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training.
1605.08803#23
1605.08803#25
1605.08803
[ "1602.05110" ]
1605.08803#25
Density estimation using Real NVP
It allows, however, a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational
1605.08803#24
1605.08803#26
1605.08803
[ "1602.05110" ]
1605.08803#26
Density estimation using Real NVP
autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work. Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. Moreover, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper. The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous Q-learning [23], or find representations where local linear Gaussian approximations are more appropriate [67].
1605.08803#25
1605.08803#27
1605.08803
[ "1602.05110" ]
1605.08803#27
Density estimation using Real NVP
# 6 Acknowledgments The authors thank the developers of TensorFlow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aäron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper. # References [1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow:
1605.08803#26
1605.08803#28
1605.08803
[ "1602.05110" ]
1605.08803#28
Density estimation using Real NVP
Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [2] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015. [3] Johannes Ballé, Valero Laparra, and Eero P. Simoncelli. Density modeling of images using a generalized normalization transformation. arXiv preprint arXiv:1511.06281, 2015. [4] Anthony J. Bell and Terrence J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129–1159, 1995. [5] Yoshua Bengio.
1605.08803#27
1605.08803#29
1605.08803
[ "1602.05110" ]
1605.08803#29
Density estimation using Real NVP
Artificial neural networks and their application to sequence recognition. 1991. [6] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pages 400–406, 1999. [7] Mathias Berglund and Tapani Raiko. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. arXiv preprint arXiv:1312.6002, 2013. [8] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015. [9] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015. [10] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. [11] Scott Shaobing Chen and Ramesh A. Gopinath. Gaussianization. In Advances in Neural Information Processing Systems, 2000. [12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2962–2970, 2015. [13] Peter Dayan, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995. [14] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 247–254. MIT Press, 1995. [15] Emily L.
1605.08803#28
1605.08803#30
1605.08803
[ "1602.05110" ]
1605.08803#30
Density estimation using Real NVP
Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28:
1605.08803#29
1605.08803#31
1605.08803
[ "1602.05110" ]
1605.08803#31
Density estimation using Real NVP
Sample-based non-uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260â 265. ACM, 1986. [17] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. [18] Brendan J Frey. Graphical models for machine learning and digital communication. MIT press, 1998. [19] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.
1605.08803#30
1605.08803#32
1605.08803
[ "1602.05110" ]
1605.08803#32
Density estimation using Real NVP
Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 262â 270, 2015. [20] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for distribution estimation. CoRR, abs/1502.03509, 2015. [21] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672â 2680, 2014. [22] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016. [23] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748, 2016. [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016. [26] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â 1780, 1997. [27] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303â 1347, 2013.
1605.08803#31
1605.08803#33
1605.08803
[ "1602.05110" ]
1605.08803#33
Density estimation using Real NVP
[28] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis, volume 46. John Wiley & Sons, 2004. [29] Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429â 439, 1999. [30] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic.
1605.08803#32
1605.08803#34
1605.08803
[ "1602.05110" ]
1605.08803#34
Density estimation using Real NVP
Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016. [31] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [32] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. [33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [34] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive ï¬ ow. arXiv preprint arXiv:1606.04934, 2016. [35] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [36] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009. [37] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011. [38] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. [39] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efï¬ cient backprop. In Neural networks: Tricks of the trade, pages 9â 48. Springer, 2012. [40] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
1605.08803#33
1605.08803#35
1605.08803
[ "1602.05110" ]
1605.08803#35
Density estimation using Real NVP
[41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. [42] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016. [43] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014. [44] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. [45] Radford M Neal and Geoffrey E Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355-368. Springer, 1998. [46] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. [47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. [48] Danilo Jimenez Rezende and Shakir Mohamed.
1605.08803#34
1605.08803#36
1605.08803
[ "1602.05110" ]
1605.08803#36
Density estimation using Real NVP
Variational inference with normalizing ï¬ ows. arXiv preprint arXiv:1505.05770, 2015. [49] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi- mate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. [50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013. [51] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back- propagating errors. Cognitive modeling, 5(3):1, 1988. [52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â 252, 2015. [53] Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines.
1605.08803#35
1605.08803#37
1605.08803
[ "1602.05110" ]
1605.08803#37
Density estimation using Real NVP
In International conference on artiï¬ cial intelligence and statistics, pages 448â 455, 2009. [54] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016. [55] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014. [56] Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean ï¬ eld theory for sigmoid belief networks. Journal of artiï¬ cial intelligence research, 4(1):61â 76, 1996. [57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556, 2014. [58] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory.
1605.08803#36
1605.08803#38
1605.08803
[ "1602.05110" ]
1605.08803#38
Density estimation using Real NVP
Technical report, DTIC Document, 1986. [59] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2256â 2265, 2015. [60] Sasha Targ, Diogo Almeida, and Kevin Lyman.
1605.08803#37
1605.08803#39
1605.08803
[ "1602.05110" ]
1605.08803#39
Density estimation using Real NVP
Resnet in resnet: Generalizing residual architectures. CoRR, abs/1603.08029, 2016. [61] Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances in Neural Information Processing Systems, pages 1918â 1926, 2015. [62] Lucas Theis, Aäron Van Den Oord, and Matthias Bethge. A note on the evaluation of generative models. CoRR, abs/1511.01844, 2015. [63] Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprint arXiv:1511.06499, 2015. [64] Benigno Uria, Iain Murray, and Hugo Larochelle.
1605.08803#38
1605.08803#40
1605.08803
[ "1602.05110" ]
1605.08803#40
Density estimation using Real NVP
Rnade: The real-valued neural autoregressive density- estimator. In Advances in Neural Information Processing Systems, pages 2175â 2183, 2013. [65] Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016. [66] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015. [67] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller.
1605.08803#39
1605.08803#41
1605.08803
[ "1602.05110" ]
1605.08803#41
Density estimation using Real NVP
Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2728â 2736, 2015. [68] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â 256, 1992. [69] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015. [70] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. [71] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. arXiv preprint arXiv:1603.08511, 2016.
1605.08803#40
1605.08803#42
1605.08803
[ "1602.05110" ]
1605.08803#42
Density estimation using Real NVP
A Samples Figure 7: Samples from a model trained on Imagenet (64 × 64). Figure 8: Samples from a model trained on CelebA. Figure 9: Samples from a model trained on LSUN (bedroom category). Figure 10:
1605.08803#41
1605.08803#43
1605.08803
[ "1602.05110" ]
1605.08803#43
Density estimation using Real NVP
Samples from a model trained on LSUN (church outdoor category). Figure 11: Samples from a model trained on LSUN (tower category). B Manifold Figure 12: Manifold from a model trained on Imagenet (64 × 64). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to $\phi$, the y-axis to $\phi'$, and $\phi, \phi' \in \{0, \frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}\}$.
1605.08803#42
1605.08803#44
1605.08803
[ "1602.05110" ]
1605.08803#44
Density estimation using Real NVP
Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to $\phi$, the y-axis to $\phi'$, and $\phi, \phi' \in \{0, \frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}\}$. Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to $\phi$, the y-axis to $\phi'$, and $\phi, \phi' \in \{0, \frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}\}$.
1605.08803#43
1605.08803#45
1605.08803
[ "1602.05110" ]
1605.08803#45
Density estimation using Real NVP
Figure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to $\phi$, the y-axis to $\phi'$, and $\phi, \phi' \in \{0, \frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}\}$. Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to $\phi$, the y-axis to $\phi'$, and $\phi, \phi' \in \{0, \frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}\}$. # C Extrapolation Inspired by the texture generation work of [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images two or ten times as large as those present in the dataset. As we can observe in the following figures, our model successfully creates a "texture" representation of the dataset while maintaining spatial smoothness across the image. Our convolutional architecture is aware of the position of a given pixel only through edge effects in convolutions, so our model behaves much like a stationary process. This also explains why these samples are more consistent on LSUN, where the training data was obtained using random crops.
1605.08803#44
1605.08803#46
1605.08803
[ "1602.05110" ]
1605.08803#46
Density estimation using Real NVP
(a) ×2 (b) ×10 Figure 17: We generate samples a factor bigger than the training set image size on Imagenet (64 × 64). (a) ×2 (b) ×10 Figure 18: We generate samples a factor bigger than the training set image size on CelebA. (a) ×2 (b) ×10 Figure 19:
1605.08803#45
1605.08803#47
1605.08803
[ "1602.05110" ]
1605.08803#47
Density estimation using Real NVP
We generate samples a factor bigger than the training set image size on LSUN (bedroom category). (a) ×2 (b) ×10 Figure 20: We generate samples a factor bigger than the training set image size on LSUN (church outdoor category). (a) ×2 (b) ×10 Figure 21: We generate samples a factor bigger than the training set image size on LSUN (tower category).
1605.08803#46
1605.08803#48
1605.08803
[ "1602.05110" ]
1605.08803#48
Density estimation using Real NVP
# D Latent variables semantic As in [22], we further try to grasp the semantics of the latent variables learned at each layer by running ablation tests: we infer the latent variables and resample the lowest levels from a standard Gaussian, progressively raising the highest level affected by this resampling. A sketch of this procedure is given below. As the following figures show, the semantics of our latent space appear to operate at a graphical level rather than at the level of higher-level concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely responsible for this limitation. Figure 22: Conceptual compression from a model trained on Imagenet (64 × 64). The leftmost column represents the original image; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept. Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original image; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
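The ablation above can be summarized in a few lines of NumPy. This is only an illustrative sketch: it assumes the inferred latent variables are available as a list ordered from the lowest (most local) to the highest level, and it does not reproduce the exact multi-scale bookkeeping of the trained model.

```python
import numpy as np

def conceptual_compression(latents, keep_fraction, rng=None):
    # `latents`: list of arrays ordered from lowest to highest level (assumption of
    # this sketch). Keep roughly `keep_fraction` of the latent variables, starting
    # from the highest level, and resample the rest from a standard Gaussian prior.
    rng = rng or np.random.default_rng(0)
    budget = int(keep_fraction * sum(z.size for z in latents))
    kept = []
    for z in reversed(latents):                        # highest level first
        if budget >= z.size:
            kept.append(z.copy())                      # stored latent variables
            budget -= z.size
        else:
            kept.append(rng.standard_normal(z.shape))  # resampled from the prior
    return list(reversed(kept))

# Example: three levels, keeping roughly 25% of the latent variables.
levels = [np.zeros(64), np.zeros(16), np.zeros(4)]
mixed = conceptual_compression(levels, keep_fraction=0.25)
```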
1605.08803#47
1605.08803#49
1605.08803
[ "1602.05110" ]
1605.08803#49
Density estimation using Real NVP
Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original image; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept. Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original image; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
1605.08803#48
1605.08803#50
1605.08803
[ "1602.05110" ]
1605.08803#50
Density estimation using Real NVP
Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original image; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept. # E Batch normalization We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics $\hat{\mu}_t, \hat{\sigma}^2_t$: $\tilde{\mu}_{t+1} = \rho\,\tilde{\mu}_t + (1 - \rho)\,\hat{\mu}_t$ (20) and $\tilde{\sigma}^2_{t+1} = \rho\,\tilde{\sigma}^2_t + (1 - \rho)\,\hat{\sigma}^2_t$ (21), where $\rho$ is the momentum. When using $\tilde{\mu}_{t+1}, \tilde{\sigma}^2_{t+1}$,
1605.08803#49
1605.08803#51
1605.08803
[ "1602.05110" ]
1605.08803#51
Density estimation using Real NVP
we only propagate gradient through the current batch statistics $\hat{\mu}_t, \hat{\sigma}^2_t$. We observe that using this lag helps the model train with very small minibatches; a sketch of the update is given below. We used batch normalization with a moving average for our results on CIFAR-10. # F Attribute change Additionally, we exploit the attribute information y in CelebA to build a conditional model, i.e. the invertible function f from image to latent variable uses the labels in y to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images x with their original attributes y and decode them using a new set of attributes y', built by shuffling the original attributes inside the batch. We obtain the new images $x' = g(f(x; y); y')$. We observe that, although the faces are changed so as to respect the new attributes, several properties, such as position and background, remain unchanged.
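A minimal sketch of the lagged statistics in Equations 20-21, assuming the per-batch mean and variance are already computed; the momentum value is illustrative, and the fact that gradients flow only through the current batch statistics is not expressed in plain NumPy.

```python
import numpy as np

def lagged_batch_stats(mu_run, var_run, batch_mean, batch_var, rho=0.99):
    # Momentum update of the running normalization statistics (Eqs. 20-21).
    mu_next = rho * mu_run + (1.0 - rho) * batch_mean
    var_next = rho * var_run + (1.0 - rho) * batch_var
    return mu_next, var_next

# Example with per-channel statistics of a small batch.
x = np.random.default_rng(0).normal(loc=2.0, scale=3.0, size=(8, 4))
mu_run, var_run = np.zeros(4), np.ones(4)       # running statistics so far
mu_run, var_run = lagged_batch_stats(mu_run, var_run, x.mean(axis=0), x.var(axis=0))
```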
1605.08803#50
1605.08803#52
1605.08803
[ "1602.05110" ]
1605.08803#52
Density estimation using Real NVP
Figure 27: Examples x from the CelebA dataset. Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig. 27, including position and background.
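The attribute swapping behind Figures 27-28 amounts to one encode/decode round trip with permuted labels. In the sketch below, f and g stand for the trained conditional encoder and decoder; the lambda stand-ins are dummies that exist only so the example runs.

```python
import numpy as np

def swap_attributes(x_batch, y_batch, f, g, rng=None):
    # Encode with the original attributes, decode with shuffled ones: x' = g(f(x; y); y').
    rng = rng or np.random.default_rng(0)
    y_new = y_batch[rng.permutation(len(y_batch))]
    z = f(x_batch, y_batch)
    return g(z, y_new), y_new

# Dummy conditional encoder/decoder so the sketch runs end to end; in the real
# model these are the forward and inverse passes of the conditional flow.
f = lambda x, y: x + y[:, None]
g = lambda z, y: z - y[:, None]
x = np.random.default_rng(1).normal(size=(4, 8))
y = np.arange(4.0)
x_swapped, y_used = swap_attributes(x, y, f, g)
```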
1605.08803#51
1605.08803#53
1605.08803
[ "1602.05110" ]
1605.08803#53
Density estimation using Real NVP
32
1605.08803#52
1605.08803
[ "1602.05110" ]
1605.07725#0
Adversarial Training Methods for Semi-Supervised Text Classification
arXiv:1605.07725v4 [stat.ML] 16 Nov 2021 Published as a conference paper at ICLR 2017 # ADVERSARIAL TRAINING METHODS FOR SEMI-SUPERVISED TEXT CLASSIFICATION Takeru Miyato1,2*, Andrew M Dai2, Ian Goodfellow3 [email protected], [email protected], [email protected] 1 Preferred Networks, Inc., ATR Cognitive Mechanisms Laboratories, Kyoto University 2 Google Brain 3 OpenAI # ABSTRACT Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that, while training, the model is less prone to overfi
1605.07725#1
1605.07725
[ "1603.04467" ]
1605.07725#1
Adversarial Training Methods for Semi-Supervised Text Classification
tting. Code is available at https://github.com/tensorï¬ ow/models/tree/master/research/adversarial_text. # INTRODUCTION Adversarial examples are examples that are created by making small perturbations to the input de- signed to signiï¬ cantly increase the loss incurred by a machine learning model (Szegedy et al., 2014; Goodfellow et al., 2015). Several models, including state of the art convolutional neural networks, lack the ability to classify adversarial examples correctly, sometimes even when the adversarial perturbation is constrained to be so small that a human observer cannot perceive it. Adversarial training is the process of training a model to correctly classify both unmodiï¬ ed examples and ad- versarial examples. It improves not only robustness to adversarial examples, but also generalization performance for original examples. Adversarial training requires the use of labels when training models that use a supervised cost, because the label appears in the cost function that the adversarial perturbation is designed to maximize. Virtual adversarial training (Miyato et al., 2016) extends the idea of adversarial training to the semi-supervised regime and unlabeled examples. This is done by regularizing the model so that given an example, the model will produce the same output distribution as it produces on an adversarial perturbation of that example. Virtual adversarial training achieves good generalization performance for both supervised and semi-supervised learning tasks. Previous work has primarily applied adversarial and virtual adversarial training to image classiï¬ ca- tion tasks. In this work, we extend these techniques to text classiï¬ cation tasks and sequence models. Adversarial perturbations typically consist of making small modiï¬ cations to very many real-valued inputs. For text classiï¬ cation, the input is discrete, and usually represented as a series of high- dimensional one-hot vectors. Because the set of high-dimensional one-hot vectors does not admit inï¬ nitesimal perturbation, we deï¬ ne the perturbation on continuous word embeddings instead of dis- crete word inputs.
1605.07725#0
1605.07725#2
1605.07725
[ "1603.04467" ]
1605.07725#2
Adversarial Training Methods for Semi-Supervised Text Classification
Traditional adversarial and virtual adversarial training can be interpreted both as a regularization strategy (Szegedy et al., 2014; Goodfellow et al., 2015; Miyato et al., 2016) and as de- fense against an adversary who can supply malicious inputs (Szegedy et al., 2014; Goodfellow et al., 2015). Since the perturbed embedding does not map to any word and the adversary presumably does not have access to the word embedding layer, our proposed training strategy is no longer intended as
1605.07725#1
1605.07725#3
1605.07725
[ "1603.04467" ]
1605.07725#3
Adversarial Training Methods for Semi-Supervised Text Classification
â This work was done when the author was at Google Brain. 1 Published as a conference paper at ICLR 2017 a defense against an adversary. We thus propose this approach exclusively as a means of regularizing a text classiï¬ er by stabilizing the classiï¬ cation function. We show that our approach with neural language model unsupervised pretraining as proposed by Dai & Le (2015) achieves state of the art performance for multiple semi-supervised text clas- siï¬ cation tasks, including sentiment classiï¬ cation and topic classiï¬ cation.
1605.07725#2
1605.07725#4
1605.07725
[ "1603.04467" ]
1605.07725#4
Adversarial Training Methods for Semi-Supervised Text Classification
We emphasize that opti- mization of only one additional hyperparameter Ç«, the norm constraint limiting the size of the adver- sarial perturbations, achieved such state of the art performance. These results strongly encourage the use of our proposed method for other text classiï¬ cation tasks. We believe that text classiï¬ ca- tion is an ideal setting for semi-supervised learning because there are abundant unlabeled corpora for semi-supervised learning algorithms to leverage. This work is the ï¬ rst work we know of to use adversarial and virtual adversarial training to improve a text or RNN model. We also analyzed the trained models to qualitatively characterize the effect of adversarial and vir- tual adversarial training. We found that adversarial and virtual adversarial training improved word embeddings over the baseline methods.
1605.07725#3
1605.07725#5
1605.07725
[ "1603.04467" ]
1605.07725#5
Adversarial Training Methods for Semi-Supervised Text Classification
# 2 MODEL We denote a sequence of T words as {w(t) | t = 1, . . . , T}, and a corresponding target as y. To transform a discrete word input to a continuous vector, we define the word embedding matrix $V \in \mathbb{R}^{(K+1) \times D}$, where K is the number of words in the vocabulary and each row $v_k$ corresponds to the word embedding of the k-th word. Note that the (K + 1)-th word embedding is used as an embedding of an "
1605.07725#4
1605.07725#6
1605.07725
[ "1603.04467" ]
1605.07725#6
Adversarial Training Methods for Semi-Supervised Text Classification
end of sequence (eos)" token, v_eos. As a text classification model, we used a simple LSTM-based neural network model, shown in Figure 1a. At time step t, the input is the discrete word w(t), and the corresponding word embedding is v(t). We additionally tried the bidirectional LSTM architecture (Graves & Schmidhuber, 2005), since this is used by the current state of the art method (Johnson & Zhang, 2016b). [Figure 1 diagram: (a) LSTM-based text classification model; (b) the model with perturbed embeddings, where a perturbation r(t) is added to each normalized embedding.] Figure 1: Text classification models with clean embeddings (a) and with perturbed embeddings (b). For constructing the bidirectional LSTM model for text classification, we add an additional LSTM on the reversed sequence to the unidirectional LSTM model described in Figure 1. The model then predicts the label on the concatenated LSTM outputs of both ends of the sequence. In adversarial and virtual adversarial training, we train the classifier to be robust to perturbations of the embeddings, shown in Figure 1b. These perturbations are described in detail in Section 3.
1605.07725#5
1605.07725#7
1605.07725
[ "1603.04467" ]
1605.07725#7
Adversarial Training Methods for Semi-Supervised Text Classification
At present, it is sufficient to understand that the perturbations are of bounded norm. The model could trivially learn to make the perturbations insignificant by learning embeddings with very large norm. To prevent this pathological solution, when we apply adversarial and virtual adversarial training to the model we defined above, we replace the embeddings $v_k$ with normalized embeddings $\bar{v}_k$, defined as: $\bar{v}_k = \frac{v_k - \mathrm{E}(v)}{\sqrt{\mathrm{Var}(v)}}$ where $\mathrm{E}(v) = \sum_{j=1}^{K} f_j v_j$ and $\mathrm{Var}(v) = \sum_{j=1}^{K} f_j \left(v_j - \mathrm{E}(v)\right)^2$ (1)
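A sketch of the normalization in Eq. (1) in NumPy; the weights are the word frequencies f_j defined in the sentence that follows, and the small constant under the square root is our own numerical safeguard rather than part of the method.

```python
import numpy as np

def normalize_embeddings(V, freqs):
    # V: (K, D) embedding matrix; freqs: word frequencies f_j over the training data.
    f = np.asarray(freqs, dtype=float)
    f = f / f.sum()                                   # weights must sum to one
    mean = (f[:, None] * V).sum(axis=0)               # E(v), shape (D,)
    var = (f[:, None] * (V - mean) ** 2).sum(axis=0)  # Var(v), shape (D,)
    return (V - mean) / np.sqrt(var + 1e-12)

# Example: 5 words with 3-dimensional embeddings and unequal frequencies.
V = np.random.default_rng(0).normal(size=(5, 3))
V_bar = normalize_embeddings(V, freqs=[0.4, 0.3, 0.2, 0.05, 0.05])
```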
1605.07725#6
1605.07725#8
1605.07725
[ "1603.04467" ]
1605.07725#8
Adversarial Training Methods for Semi-Supervised Text Classification
where $f_i$ is the frequency of the i-th word, calculated within all training examples. # 3 ADVERSARIAL AND VIRTUAL ADVERSARIAL TRAINING Adversarial training (Goodfellow et al., 2015) is a novel regularization method for classifiers to improve robustness to small, approximately worst case perturbations. Let us denote x as the input and θ as the parameters of a classifier. When applied to a classifier, adversarial training adds the following term to the cost function: $-\log p(y \mid x + r_{\mathrm{adv}}; \theta)$ where $r_{\mathrm{adv}} = \arg\min_{r, \|r\| \le \epsilon} \log p(y \mid x + r; \hat{\theta})$ (2) where r is a perturbation on the input and $\hat{\theta}$ is a constant set to the current parameters of a classifier. The use of the constant copy $\hat{\theta}$ rather than θ indicates that the backpropagation algorithm should not be used to propagate gradients through the adversarial example construction process. At each step of training, we identify the worst case perturbation $r_{\mathrm{adv}}$ against the current model $p(y \mid x; \hat{\theta})$ in Eq. (2), and train the model to be robust to such perturbations by minimizing Eq. (2) with respect to θ. However, we cannot calculate this value exactly in general, because exact minimization with respect to r is intractable for many interesting models such as neural networks. Goodfellow et al. (2015) proposed to approximate this value by linearizing $\log p(y \mid x; \hat{\theta})$ around x. With a linear approximation and an L2 norm constraint in Eq.(2), the resulting adversarial perturbation is $r_{\mathrm{adv}} = -\epsilon g / \|g\|_2$ where $g = \nabla_x \log p(y \mid x; \hat{\theta})$.
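Once the gradient g is available from backpropagation, the linearized perturbation is a one-liner. Below is a hedged sketch with a toy logistic-regression check, where the gradient of log p(y = 1 | x) with respect to x is known in closed form; the weights and epsilon are arbitrary illustrative values.

```python
import numpy as np

def adversarial_perturbation(g, epsilon):
    # r_adv = -eps * g / ||g||_2, with g = grad_x log p(y | x; theta_hat).
    g = np.asarray(g, dtype=float)
    return -epsilon * g / (np.linalg.norm(g) + 1e-12)

# Toy check: logistic regression p(y=1|x) = sigmoid(w.x), so
# grad_x log p(y=1|x) = (1 - sigmoid(w.x)) * w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
p = 1.0 / (1.0 + np.exp(-w @ x))
r_adv = adversarial_perturbation((1.0 - p) * w, epsilon=0.1)
p_perturbed = 1.0 / (1.0 + np.exp(-w @ (x + r_adv)))
assert p_perturbed < p   # the perturbation lowers the likelihood of the true label
```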
1605.07725#7
1605.07725#9
1605.07725
[ "1603.04467" ]
1605.07725#9
Adversarial Training Methods for Semi-Supervised Text Classification
This perturbation can be easily computed using backpropagation in neural networks. Virtual adversarial training (Miyato et al., 2016) is a regularization method closely related to adversarial training. The additional cost introduced by virtual adversarial training is the following: $\mathrm{KL}\!\left[ p(\cdot \mid x; \hat{\theta}) \,\|\, p(\cdot \mid x + r_{\mathrm{v\text{-}adv}}; \theta) \right]$ (3) where $r_{\mathrm{v\text{-}adv}} = \arg\max_{r, \|r\| \le \epsilon} \mathrm{KL}\!\left[ p(\cdot \mid x; \hat{\theta}) \,\|\, p(\cdot \mid x + r; \hat{\theta}) \right]$ (4) where KL[p || q] denotes the KL divergence between distributions p and q. By minimizing Eq.(3), a classifier
1605.07725#8
1605.07725#10
1605.07725
[ "1603.04467" ]
1605.07725#10
Adversarial Training Methods for Semi-Supervised Text Classification
is trained to be smooth. This can be considered as making the classifier resistant to perturbations in directions to which it is most sensitive under the current model $p(y \mid x; \hat{\theta})$. The virtual adversarial loss in Eq.(3) requires only the input x and does not require the actual label y, while the adversarial loss defined in Eq.(2) requires the label y. This makes it possible to apply virtual adversarial training to semi-supervised learning. Although we also in general cannot analytically calculate the virtual adversarial loss, Miyato et al. (2016) proposed to calculate the approximation of Eq.(3) efficiently with backpropagation. As described in Sec. 2, in our work, we apply the adversarial perturbation to word embeddings, rather than directly to the input.
1605.07725#9
1605.07725#11
1605.07725
[ "1603.04467" ]
1605.07725#11
Adversarial Training Methods for Semi-Supervised Text Classification
To define the adversarial perturbation on the word embeddings, let us denote the concatenation of a sequence of (normalized) word embedding vectors $[\bar{v}^{(1)}, \bar{v}^{(2)}, \ldots, \bar{v}^{(T)}]$ as s, and the model conditional probability of y given s as $p(y \mid s; \theta)$, where θ are the model parameters. Then we define the adversarial perturbation $r_{\mathrm{adv}}$ on s as: $r_{\mathrm{adv}} = -\epsilon g / \|g\|_2$ where $g = \nabla_s \log p(y \mid s; \hat{\theta})$ (5) To be robust to the adversarial perturbation defined in Eq.(5), we define the adversarial loss by $L_{\mathrm{adv}}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \log p(y_n \mid s_n + r_{\mathrm{adv},n}; \theta)$ (6) where N is the number of labeled examples. Adversarial training then consists of minimizing the negative log-likelihood plus $L_{\mathrm{adv}}$ with stochastic gradient descent. In virtual adversarial training on our text classification model, at each training step, we calculate the following approximated virtual adversarial perturbation: $r_{\mathrm{v\text{-}adv}} = \epsilon g / \|g\|_2$ where $g = \nabla_{s+d} \,\mathrm{KL}\!\left[ p(\cdot \mid s; \hat{\theta}) \,\|\, p(\cdot \mid s + d; \hat{\theta}) \right]$ (7)
1605.07725#10
1605.07725#12
1605.07725
[ "1603.04467" ]
1605.07725#12
Adversarial Training Methods for Semi-Supervised Text Classification
where d is a TD-dimensional small random vector. This approximation corresponds to a second-order Taylor expansion and a single iteration of the power method on Eq.(3), as in previous work (Miyato et al., 2016). The virtual adversarial loss is then defined as: $L_{\mathrm{v\text{-}adv}}(\theta) = \frac{1}{N'} \sum_{n'=1}^{N'} \mathrm{KL}\!\left[ p(\cdot \mid s_{n'}; \hat{\theta}) \,\|\, p(\cdot \mid s_{n'} + r_{\mathrm{v\text{-}adv},n'}; \theta) \right]$ (8) where N' is the number of both labeled and unlabeled examples.
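For completeness, here is a sketch of the single power-iteration approximation behind Eqs. (7)-(8). To stay self-contained it uses a finite-difference gradient in place of backpropagation, a generic `predict_proba` callable standing in for the model p(· | s), and an illustrative value of the scaling constant xi; none of these choices come from the paper.

```python
import numpy as np

def virtual_adversarial_perturbation(s, predict_proba, epsilon, xi=1e-2, seed=0):
    # One power-method step: sample a random direction d, take the gradient of
    # KL[p(.|s) || p(.|s+d)] at s + d, and rescale it to norm epsilon (Eq. 7).
    rng = np.random.default_rng(seed)
    d = rng.normal(size=s.shape)
    d = xi * d / np.linalg.norm(d)
    p = np.clip(predict_proba(s), 1e-12, 1.0)

    def kl(q):
        q = np.clip(q, 1e-12, 1.0)
        return float(np.sum(p * (np.log(p) - np.log(q))))

    g = np.zeros_like(s)
    h = 1e-5
    for idx in np.ndindex(*s.shape):        # finite differences stand in for backprop
        dp, dm = d.copy(), d.copy()
        dp[idx] += h
        dm[idx] -= h
        g[idx] = (kl(predict_proba(s + dp)) - kl(predict_proba(s + dm))) / (2 * h)
    return epsilon * g / (np.linalg.norm(g) + 1e-12)

# Toy model: softmax over two classes from a linear map of the flattened embeddings.
W = np.random.default_rng(1).normal(size=(6, 2))
def predict(s):
    logits = s.reshape(-1) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

s = np.random.default_rng(2).normal(size=(3, 2))   # T = 3 words, D = 2 dimensions
r_vadv = virtual_adversarial_perturbation(s, predict, epsilon=5.0)
```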
1605.07725#11
1605.07725#13
1605.07725
[ "1603.04467" ]
1605.07725#13
Adversarial Training Methods for Semi-Supervised Text Classification
See Warde-Farley & Goodfellow (2016) for a recent review of adversarial training methods. # 4 EXPERIMENTAL SETTINGS All experiments used TensorFlow (Abadi et al., 2016) on GPUs. To compare our method with other text classiï¬ cation methods, we tested on 5 different text datasets. We summarize information about each dataset in Table 1. IMDB (Maas et al., 2011)1 is a standard benchmark movie review dataset for sentiment classiï¬
1605.07725#12
1605.07725#14
1605.07725
[ "1603.04467" ]
1605.07725#14
Adversarial Training Methods for Semi-Supervised Text Classification
ca- tion. Elec (Johnson & Zhang, 2015b)2 3 is an Amazon electronic product review dataset. Rotten Tomatoes (Pang & Lee, 2005) consists of short snippets of movie reviews, for sentiment classiï¬ - cation. The Rotten Tomatoes dataset does not come with separate test sets, thus we divided all examples randomly into 90% for the training set, and 10% for the test set. We repeated train- ing and evaluation ï¬ ve times with different random seeds for the division. For the Rotten Toma- toes dataset, we also collected unlabeled examples using movie reviews from the Amazon Re- views dataset (McAuley & Leskovec, 2013) 4. DBpedia (Lehmann et al., 2015; Zhang et al., 2015) is a dataset of Wikipedia pages for category classiï¬
1605.07725#13
1605.07725#15
1605.07725
[ "1603.04467" ]
1605.07725#15
Adversarial Training Methods for Semi-Supervised Text Classification
cation. Because the DBpedia dataset has no additional unlabeled examples, the results on DBpedia are for the supervised learning task only. RCV1 (Lewis et al., 2004) consists of news articles from the Reuters Corpus. For the RCV1 dataset, we followed previous works (Johnson & Zhang, 2015b) and we conducted a single topic classification task on the second level topics. We used the same division into training, test and unlabeled sets as Johnson & Zhang (2015b). Regarding pre-processing, we treated any punctuation as spaces. We converted all words to lower-case on the Rotten Tomatoes, DBpedia, and RCV1 datasets. We removed words which appear in only one document on all datasets. On RCV1, we also removed words in the English stop-words list provided by Lewis et al. (2004)5. Table 1: Summary of datasets (columns: Classes, Train, Test, Unlabeled, Avg. T, Max T). Note that unlabeled examples for the Rotten Tomatoes dataset are not provided so we instead use the unlabeled Amazon reviews dataset. IMDB: 2, 25,000, 25,000, 50,000, 239, 2,506. Elec: 2, 24,792, 24,897, 197,025, 110, 5,123. Rotten Tomatoes: 2, 9,596, 1,066, 7,911,684, 20, 54. DBpedia: 14, 560,000, 70,000, -, 49, 953. RCV1: 55, 15,564, 49,838, 668,640, 153, 9,852.
1605.07725#14
1605.07725#16
1605.07725
[ "1603.04467" ]
1605.07725#16
Adversarial Training Methods for Semi-Supervised Text Classification
4.1 RECURRENT LANGUAGE MODEL PRE-TRAINING Following Dai & Le (2015), we initialized the word embedding matrix and LSTM weights with a pre-trained recurrent language model (Bengio et al., 2006; Mikolov et al., 2010) that was trained on 1http://ai.stanford.edu/~amaas/data/sentiment/ 2http://riejohnson.com/cnn_data.html 3There are some duplicated reviews in the original Elec dataset, and we used the dataset with removal of the duplicated reviews, provided by Johnson & Zhang (2015b), thus there are slightly fewer examples shown in Table 1 than the ones in previous works(Johnson & Zhang, 2015b; 2016b). # 4http://snap.stanford.edu/data/web-Amazon.html 5http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm 4 Published as a conference paper at ICLR 2017 both labeled and unlabeled examples. We used a unidirectional single-layer LSTM with 1024 hidden units. The word embedding dimension D was 256 on IMDB and 512 on the other datasets. We used a sampled softmax loss with 1024 candidate samples for training. For the optimization, we used the Adam optimizer (Kingma & Ba, 2015), with batch size 256, an initial learning rate of 0.001, and a 0.9999 learning rate exponential decay factor at each training step. We trained for 100,000 steps. We applied gradient clipping with norm set to 1.0 on all the parameters except word embeddings. To reduce runtime on GPU, we used truncated backpropagation up to 400 words from each end of the sequence. For regularization of the recurrent language model, we applied dropout (Srivastava et al., 2014) on the word embedding layer with 0.5 dropout rate. For the bidirectional LSTM model, we used 512 hidden units LSTM for both the standard order and reversed order sequences, and we used 256 dimensional word embeddings which are shared with both of the LSTMs. The other hyperparameters are the same as for the unidirectional LSTM. We tested the bidirectional LSTM model on IMDB, Elec and RCV because there are relatively long sentences in the datasets.
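For convenience, the pre-training hyperparameters reported in Section 4.1 can be gathered in one place. This is just a plain dictionary of the quoted values for reference, not the authors' actual TensorFlow configuration.

```python
# Recurrent language model pre-training settings quoted from Section 4.1.
lm_pretrain_config = {
    "lstm_layers": 1,
    "lstm_hidden_units": 1024,
    "embedding_dim": {"IMDB": 256, "other_datasets": 512},
    "sampled_softmax_candidates": 1024,
    "optimizer": "Adam",
    "batch_size": 256,
    "initial_learning_rate": 1e-3,
    "lr_exponential_decay_per_step": 0.9999,
    "training_steps": 100_000,
    "gradient_clip_norm": 1.0,        # all parameters except word embeddings
    "truncated_bptt_words": 400,      # from each end of the sequence
    "embedding_dropout_rate": 0.5,
    "bidirectional_variant": {"lstm_hidden_units": 512, "embedding_dim": 256},
}
```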
1605.07725#15
1605.07725#17
1605.07725
[ "1603.04467" ]
1605.07725#17
Adversarial Training Methods for Semi-Supervised Text Classification
Pretraining with a recurrent language model was very effective on classiï¬ cation performance on all the datasets we tested on and so our results in Section 5 are with this pretraining. 4.2 TRAINING CLASSIFICATION MODELS After pre-training, we trained the text classiï¬ cation model shown in Figure 1a with adversarial and virtual adversarial training as described in Section 3. Between the softmax layer for the target y and the ï¬ nal output of the LSTM, we added a hidden layer, which has dimension 30 on IMDB, Elec and Rotten Tomatoes, and 128 on DBpedia and RCV1. The activation function on the hidden layer was ReLU(Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011). For optimization, we again used the Adam optimizer, with 0.0005 initial learning rate 0.9998 exponential decay. Batch sizes are 64 on IMDB, Elec, RCV1, and 128 on DBpedia. For the Rotten Tomatoes dataset, for each step, we take a batch of size 64 for calculating the loss of the negative log-likelihood and adversarial training, and 512 for calculating the loss of virtual adversarial training. Also for Rotten Tomatoes, we used texts with lengths T less than 25 in the unlabeled dataset. We iterated 10,000 training steps on all datasets except IMDB and DBpedia, for which we used 15,000 and 20,000 training steps respectively. We again applied gradient clipping with the norm as 1.0 on all the parameters except the word embedding. We also used truncated backpropagation up to 400 words, and also generated the adversarial and virtual adversarial perturbation up to 400 words from each end of the sequence. We found the bidirectional LSTM to converge more slowly, so we iterated for 15,000 training steps when training the bidirectional LSTM classiï¬
1605.07725#16
1605.07725#18
1605.07725
[ "1603.04467" ]
1605.07725#18
Adversarial Training Methods for Semi-Supervised Text Classification
cation model. For each dataset, we divided the original training set into training set and validation set, and we roughly optimized some hyperparameters shared with all of the methods; (model architecture, batch- size, training steps) with the validation performance of the base model with embedding dropout. For each method, we optimized two scalar hyperparameters with the validation set. These were the dropout rate on the embeddings and the norm constraint Ç« of adversarial and virtual adversarial training. Note that for adversarial and virtual adversarial training, we generate the perturbation after applying embedding dropout, which we found performed the best. We did not do early stopping with these methods. The method with only pretraining and embedding dropout is used as the baseline (referred to as Baseline in each table).
1605.07725#17
1605.07725#19
1605.07725
[ "1603.04467" ]
1605.07725#19
Adversarial Training Methods for Semi-Supervised Text Classification
5 RESULTS 5.1 TEST PERFORMANCE ON IMDB DATASET AND MODEL ANALYSIS Figure 2 shows the learning curves on the IMDB test set with the baseline method (only embedding dropout and pretraining), adversarial training, and virtual adversarial training. We can see in Fig- ure 2a that adversarial and virtual adversarial training achieved lower negative log likelihood than the baseline. Furthermore, virtual adversarial training, which can utilize unlabeled data, maintained this low negative log-likelihood while the other methods began to overï¬ t later in training. Regarding adversarial and virtual adversarial loss in Figure 2b and 2c, we can see the same tendency as for negative log likelihood; virtual adversarial training was able to keep these values lower than other
1605.07725#18
1605.07725#20
1605.07725
[ "1603.04467" ]
1605.07725#20
Adversarial Training Methods for Semi-Supervised Text Classification
methods. Because adversarial training operates only on the labeled subset of the training data, it eventually overfits even the task of resisting adversarial perturbations. [Figure 2 panels plot the test negative log likelihood, test adversarial loss, and test virtual adversarial loss against training step for the Baseline, Adversarial, and Virtual adversarial methods.] (a) Negative log likelihood (b) $L_{\mathrm{adv}}(\theta)$ (c) $L_{\mathrm{v\text{-}adv}}(\theta)$ Figure 2: Learning curves of (a) negative log likelihood, (b) adversarial loss (defined in Eq.(6)) and (c) virtual adversarial loss (defined in Eq.(8)) on IMDB. All values were evaluated on the test set. Adversarial and virtual adversarial loss were evaluated with ε = 5.0. The optimal value of ε differs between adversarial training and virtual adversarial training, but the value of 5.0 performs very well for both and provides a consistent point of comparison.
1605.07725#19
1605.07725#21
1605.07725
[ "1603.04467" ]
1605.07725#21
Adversarial Training Methods for Semi-Supervised Text Classification
Table 2 shows the test performance on IMDB with each training method. â Adversarial + Virtual Ad- versarialâ means the method with both adversarial and virtual adversarial loss with the shared norm constraint Ç«. With only embedding dropout, our model achieved a 7.39% error rate. Adversarial and virtual adversarial training improved the performance relative to our baseline, and virtual adversarial training achieved performance on par with the state of the art, 5.91% error rate. This is despite the fact that the state of the art model requires training a bidirectional LSTM whereas our model only uses a unidirectional LSTM. We also show results with a bidirectional LSTM. Our bidirectional LSTM model has the same performance as a unidirectional LSTM with virtual adversarial training. A common misconception is that adversarial training is equivalent to training on noisy examples. Noise is actually a far weaker regularizer than adversarial perturbations because, in high dimensional input spaces, an average noise vector is approximately orthogonal to the cost gradient. Adversarial perturbations are explicitly chosen to consistently increase the cost. To demonstrate the superiority of adversarial training over the addition of noise, we include control experiments which replaced adversarial perturbations with random perturbations from a multivariate Gaussian with scaled norm, on each embedding in the sequence.
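The claim that an average noise vector is nearly orthogonal to the cost gradient in high dimensions is easy to check numerically. The sketch below measures cosine similarities between random Gaussian directions and a fixed stand-in "gradient" vector; the dimensions and sample count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
for dim in (10, 1_000, 100_000):
    g = rng.normal(size=dim)                 # stands in for a cost gradient
    noise = rng.normal(size=(256, dim))      # candidate random perturbations
    cos = noise @ g / (np.linalg.norm(noise, axis=1) * np.linalg.norm(g))
    print(dim, np.abs(cos).mean())           # shrinks roughly like 1/sqrt(dim)
```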
1605.07725#20
1605.07725#22
1605.07725
[ "1603.04467" ]
1605.07725#22
Adversarial Training Methods for Semi-Supervised Text Classification
In Table 2, â Random perturbation with labeled examplesâ is the method in which we replace radv with random perturbations, and â Random perturbation with labeled and unlabeled examplesâ is the method in which we replace rv-adv with random perturbations. Every adversarial training method outperformed every random perturbation method. To visualize the effect of adversarial and virtual adversarial training on embeddings, we examined embeddings trained using each method. Table 3 shows the 10 top nearest neighbors to â
1605.07725#21
1605.07725#23
1605.07725
[ "1603.04467" ]
1605.07725#23
Adversarial Training Methods for Semi-Supervised Text Classification
goodâ and â badâ with trained embeddings. The baseline and random methods are both strongly inï¬ uenced by the grammatical structure of language, due to the language model pretraining step, but are not strongly inï¬ uenced by the semantics of the text classiï¬ cation task. For example, â badâ appears in the list of nearest neighbors to â goodâ on the baseline and the random perturbation method. Both â badâ and â goodâ are adjectives that can modify the same set of nouns, so it is reasonable for a language model to assign them similar embeddings, but this clearly does not convey much information about the actual meaning of the words. Adversarial training ensures that the meaning of a sentence cannot be inverted via a small change, so these words with similar grammatical role but different meaning become separated. When using adversarial and virtual adversarial training, â badâ no longer appears in the 10 top nearest neighbors to â goodâ . â badâ
1605.07725#22
1605.07725#24
1605.07725
[ "1603.04467" ]