Dataset schema: id (string, 12–15 chars); title (string, 8–162 chars); content (string, 1–17.6k chars); prechunk_id (string, 0–15 chars); postchunk_id (string, 0–15 chars); arxiv_id (string, 10 chars); references (list, length 1).
1606.01540#17
OpenAI Gym
RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013. [11] Christos Dimitrakakis, Guangliang Li, and Nikolaos Tziortziotis. The reinforcement learning competition 2014. AI Magazine, 35(3):61–65, 2014. [12] R. S. Sutton and A. G. Barto.
1606.01540#16
1606.01540#18
1606.01540
[ "1602.01783" ]
1606.01540#18
OpenAI Gym
Reinforcement Learning: An Introduction. MIT Press, 1998. [13] Petr Baudiš and Jean-loup Gailly. Pachi: State of the art open source Go program. In Advances in Computer Games, pages 24–38. Springer, 2011. [14] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012. [15] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski.
1606.01540#17
1606.01540#19
1606.01540
[ "1602.01783" ]
1606.01540#19
OpenAI Gym
ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
1606.01540#18
1606.01540
[ "1602.01783" ]
1606.01305#0
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
arXiv:1606.01305v4 [cs.NE] 22 Sep 2017. Under review as a conference paper at ICLR 2017

# ZONEOUT: REGULARIZING RNNS BY RANDOMLY PRESERVING HIDDEN ACTIVATIONS

David Krueger¹*, Tegan Maharaj²*, János Kramár², Mohammad Pezeshki¹, Nicolas Ballas¹, Nan Rosemary Ke², Anirudh Goyal¹, Yoshua Bengio¹†, Aaron Courville¹‡, Christopher Pal². ¹ MILA, Université de Montréal, [email protected]. ² École Polytechnique de Montréal, [email protected]. * Equal contributions. † CIFAR Senior Fellow. ‡ CIFAR Fellow.

# ABSTRACT
1606.01305#1
1606.01305
[ "1603.05118" ]
1606.01305#1
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization (Cooijmans et al., 2016) yields state-of-the-art results on permuted sequential MNIST.

# INTRODUCTION

Regularizing neural nets can significantly improve performance, as indicated by the widespread use of early stopping, and success of regularization methods such as dropout and its recurrent variants (Hinton et al., 2012; Srivastava et al., 2014; Zaremba et al., 2014; Gal, 2015). In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called zoneout. RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs' robustness to perturbations in the hidden state in order to regularize transition dynamics. Like dropout, zoneout injects noise during training.
1606.01305#0
1606.01305#2
1606.01305
[ "1603.05118" ]
1606.01305#2
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
But instead of setting some units' activations to 0 as in dropout, zoneout randomly replaces some units' activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time. Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), as we observe experimentally. We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. Code for replicating all experiments can be found at: http://github.com/teganmaharaj/zoneout
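To make the mechanics concrete, here is a minimal NumPy sketch of the zoneout operation just described: per-unit Bernoulli masks at training time, and the expectation of the noise at test time (an illustrative re-implementation, not the authors' released code; the function name and interface are assumptions):

```python
import numpy as np

def zoneout(h_prev, h_new, p, training=True):
    # Each unit keeps its previous value with probability p ("zones out"),
    # otherwise it takes the newly computed value.
    if training:
        mask = np.random.rand(*h_new.shape) < p   # 1 = zone out (keep old value)
        return np.where(mask, h_prev, h_new)
    # Test time: deterministic mixture, i.e. the expectation over masks.
    return p * h_prev + (1.0 - p) * h_new
```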
1606.01305#1
1606.01305#3
1606.01305
[ "1603.05118" ]
1606.01305#3
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
2 RELATED WORK

2.1 RELATIONSHIP TO DROPOUT

Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure 1. In zoneout, instead of dropping out (being set to 0), units zone out and are set to their previous value ($h_t = h_{t-1}$). Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble (Bachman et al., 2014), injecting noise using a stochastic "identity-mask" rather than a zero-mask. We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally.

Figure 1: Zoneout as a special case of dropout; $\tilde{h}_t$ is the unit $h$'s hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta,
1606.01305#2
1606.01305#4
1606.01305
[ "1603.05118" ]
1606.01305#4
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
$\tilde{h}_t - h_{t-1}$. When this update is dropped out (represented by the dashed line), $h_t$ becomes $h_{t-1}$.

2.2 DROPOUT IN RNNS

Initially successful applications of dropout in RNNs (Pham et al., 2013; Zaremba et al., 2014) only applied dropout to feed-forward connections ("up the stack"), and not recurrent connections ("forward through time"), but several recent works (Semeniuta et al., 2016; Moon et al., 2015; Gal, 2015) propose methods that are not limited in this way. Bayer et al. (2013) successfully apply fast dropout (Wang & Manning, 2013), a deterministic approximation of dropout, to RNNs.
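The delta view from Figure 1 can be written in a few lines; a sketch under the same assumptions as the earlier snippet (note there is no 1/(1-p) rescaling, since test time uses the expectation of the mask rather than rescaled activations):

```python
import numpy as np

def zoneout_via_delta(h_prev, h_new, p):
    # Zoneout as dropout on the state delta: dropping the update
    # (keep = 0) leaves the unit at its previous value h_prev.
    delta = h_new - h_prev
    keep = (np.random.rand(*delta.shape) >= p).astype(delta.dtype)
    return h_prev + keep * delta
```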
1606.01305#3
1606.01305#5
1606.01305
[ "1603.05118" ]
1606.01305#5
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Semeniuta et al. (2016) apply recurrent dropout to the updates to LSTM memory cells (or GRU states), i.e. they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMs, but zoneout does this by preserving units' activations exactly. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients (Bengio et al., 1994), zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs. Also motivated by preventing memory loss, Moon et al. (2015) propose rnnDrop. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. Semeniuta et al. (2016) show, however, that past states'
1606.01305#4
1606.01305#6
1606.01305
[ "1603.05118" ]
1606.01305#6
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies.

Gal (2015) propose another technique which uses the same mask at each timestep. Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping units' activations. The proposed variational RNN technique achieves single-model state-of-the-art test perplexity of 73.4 on word-level language modelling of Penn Treebank.

2.3 RELATIONSHIP TO STOCHASTIC DEPTH

Zoneout can also be viewed as a per-unit version of stochastic depth (Huang et al., 2016), which randomly drops entire layers of feed-forward residual networks (ResNets (He et al., 2015)). This is
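The mask-sharing distinction between these schemes and standard per-timestep recurrent dropout can be summarized in a short illustrative sketch (function and argument names are assumptions, not any paper's API):

```python
import numpy as np

def sample_dropout_masks(n_units, p, T, share_across_time):
    # rnnDrop and the variational RNN reuse one mask for the whole
    # sequence; standard recurrent dropout resamples at every timestep.
    if share_across_time:
        shared = np.random.rand(n_units) >= p   # keep-mask, sampled once
        return [shared] * T
    return [np.random.rand(n_units) >= p for _ in range(T)]
```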
1606.01305#5
1606.01305#7
1606.01305
[ "1603.05118" ]
1606.01305#7
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, Singh et al. (2016) propose zoneout for ResNets, calling it SkipForward. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed Swapout technique, which randomly drops either or both of the identity or residual connections. Unlike Singh et al. (2016), we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout.

2.4 SELECTIVELY UPDATING HIDDEN UNITS

Like zoneout, clockwork RNNs (Koutnik et al., 2014) and hierarchical RNNs (Hihi & Bengio, 1996) update only some units' activations at every timestep, but their updates are periodic, whereas zoneout's are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit.
1606.01305#6
1606.01305#8
1606.01305
[ "1603.05118" ]
1606.01305#8
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Hierarchical multiscale LSTMs (Chung et al., 2016) learn update probabilities for different units using the straight-through estimator (Bengio et al., 2013; Courbariaux et al., 2015), and combined with recently-proposed Layer Normalization (Ba et al., 2016), achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout. In recent work, Ha et al. (2016) use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization (Ba et al., 2016) in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM (Gal, 2015), and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on surprisal (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 (Rocki et al., 2016).

# 3 ZONEOUT AND PRELIMINARIES

We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs).

3.1 RECURRENT NEURAL NETWORKS

Recurrent neural networks process data $x_1, x_2, \ldots, x_T$ sequentially, constructing a corresponding sequence of representations, $h_1, h_2, \ldots, h_T$. Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, $\mathcal{T}$, which converts the present hidden state and input into a new hidden state: $h_t = \mathcal{T}(h_{t-1}, x_t)$. Zoneout modifies these dynamics by mixing the original transition operator $\mathcal{T}$ with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, $d_t$:

Zoneout: $\tilde{\mathcal{T}} = d_t \odot 1 + (1 - d_t) \odot \mathcal{T}$

3.2 LONG SHORT-TERM MEMORY
1606.01305#7
1606.01305#9
1606.01305
[ "1603.05118" ]
1606.01305#9
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
In long short-term memory RNNs (LSTMs) (Hochreiter & Schmidhuber, 1997), the hidden state is divided into memory cell $c_t$, intended for internal long-term storage, and hidden state $h_t$, used as a transient representation of state at timestep $t$. In the most widely used formulation of an LSTM (Gers et al., 2000), $c_t$ and $h_t$ are computed via a set of four "gates", including the forget gate, $f_t$, which directly connects $c_t$ to the memories of the previous timestep $c_{t-1}$,
1606.01305#8
1606.01305#10
1606.01305
[ "1603.05118" ]
1606.01305#10
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in ($i_t$, $g_t$) and out ($o_t$) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has $W_{xf}$, $W_{hf}$, and $b_f$. For brevity, we will write these as $W_x$, $W_h$, $b$.

An LSTM is defined as follows:

$i_t, f_t, o_t = \sigma(W_x x_t + W_h h_{t-1} + b)$
$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(c_t)$

A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates ($i, f, o, g$). Dropping memory cells, for example, changes the computation of $c_t$ as follows:

$c_t = d_t \odot (f_t \odot c_{t-1} + i_t \odot g_t)$

Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. Semeniuta et al. (2016), for instance, zero-mask the input gate:
1606.01305#9
1606.01305#11
1606.01305
[ "1603.05118" ]
1606.01305#11
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
$c_t = f_t \odot c_{t-1} + d_t \odot i_t \odot g_t$

When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate.

Figure 2: (a) Zoneout, vs (b) the recurrent dropout strategy of Semeniuta et al. (2016) in an LSTM. Dashed lines are zero-masked; in zoneout, the corresponding dotted lines are masked with the corresponding opposite zero-mask. Rectangular nodes are embedding layers.

In zoneout, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps:
1606.01305#10
1606.01305#12
1606.01305
[ "1603.05118" ]
1606.01305#12
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
$c_t = d_t^c \odot c_{t-1} + (1 - d_t^c) \odot (f_t \odot c_{t-1} + i_t \odot g_t)$
$h_t = d_t^h \odot h_{t-1} + (1 - d_t^h) \odot (o_t \odot \tanh(f_t \odot c_{t-1} + i_t \odot g_t))$

We usually use different zoneout masks for cells and hiddens. We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zoneout the corresponding output gates:

$c_t = f_t \odot c_{t-1} + d_t \odot i_t \odot g_t$
$h_t = ((1 - d_t) \odot o_t + d_t \odot o_{t-1}) \odot \tanh(c_t)$
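A minimal NumPy sketch of one zoneout LSTM step implementing the equations above (an illustrative re-implementation with an assumed stacked-weight layout and argument names; the authors' released Theano code is linked in the introduction):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zoneout_lstm_step(x_t, h_prev, c_prev, W, U, b, zc, zh, training=True):
    n = h_prev.shape[-1]
    gates = x_t @ W + h_prev @ U + b          # stacked pre-activations (4n wide)
    i = sigmoid(gates[..., 0*n:1*n])          # input gate
    f = sigmoid(gates[..., 1*n:2*n])          # forget gate
    o = sigmoid(gates[..., 2*n:3*n])          # output gate
    g = np.tanh(gates[..., 3*n:4*n])          # candidate update
    c_new = f * c_prev + i * g
    h_new = o * np.tanh(c_new)
    if training:
        # Separate stochastic identity masks for cells and hiddens.
        dc = np.random.rand(n) < zc
        dh = np.random.rand(n) < zh
        c_t = np.where(dc, c_prev, c_new)
        h_t = np.where(dh, h_prev, h_new)
    else:
        # Expectation of the noise at test time.
        c_t = zc * c_prev + (1 - zc) * c_new
        h_t = zh * h_prev + (1 - zh) * h_new
    return h_t, c_t
```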
1606.01305#11
1606.01305#13
1606.01305
[ "1603.05118" ]
1606.01305#13
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information.

# 4 EXPERIMENTS AND DISCUSSION

We evaluate zoneout's performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (2) Word-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (3) Character-level language modelling on the Text8 corpus (Mahoney, 2011); (4) Classification of hand-written digits on permuted sequential MNIST (pMNIST) (Le et al., 2015). We also investigate the gradient flow to past hidden states, using pMNIST.

4.1 PENN TREEBANK LANGUAGE MODELLING DATASET

The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence.¹

4.1.1 CHARACTER-LEVEL

For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in Cooijmans et al. (2016). We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non-overlapping and we use learning rates of 0.001, and 0.0003 for GRUs and tanh-RNNs respectively. Small values (0.1, 0.05) of zoneout significantly improve generalization performance for all three models. Intriguingly, we find zoneout increases training time for GRU and tanh-RNN, but decreases training time for LSTMs. We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout's
1606.01305#12
1606.01305#14
1606.01305
[ "1603.05118" ]
1606.01305#14
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
behaviour. Figure 3 shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout ($p = 0.25$). Combining $z_c = 0.5$ and $z_h = 0.05$ leads to our best-performing model, which achieves 1.27 BPC, competitive with the recent state of the art set by Ha et al. (2016). We compare zoneout to recurrent dropout (for $p \in \{0.05, 0.2, 0.25, 0.5, 0.7\}$), weight noise ($\sigma = 0.075$), norm stabilizer ($\beta = 50$) (Krueger & Memisevic, 2015), and explore stochastic depth (Huang et al., 2016) in a recurrent setting (analogous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in pMNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth nor shared-mask zoneout performed as well as separate masks, sampled per unit. Figure 3 shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline. Results are reported in Table 1, and learning curves shown in Figure 4. Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for GRU and 1.67 to 1.52 for tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs. For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react "normally" to new information.

# 4.1.2 WORD-LEVEL
1606.01305#13
1606.01305#15
1606.01305
[ "1603.05118" ]
1606.01305#15
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
For the word-level task, we replicate settings from Zaremba et al. (2014)'s best single-model performance. This network has 2 layers of 1500 units, with weights initialized uniformly [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10. With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. Adding zoneout ($z_c = 0.25$ and $z_h = 0.025$) on the recurrent connections to the model optimized for dropout on the non-recurrent connections however, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table 1.

¹ These metrics are deterministic functions of negative log-likelihood (NLL).
1606.01305#14
1606.01305#16
1606.01305
[ "1603.05118" ]
1606.01305#16
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2.

Figure 3: Validation BPC (bits per character) on character-level Penn Treebank, for different probabilities of zoneout on cells $z_c$ and hidden states $z_h$ (left), and comparison of an unregularized LSTM, zoneout $z_c = 0.5$, $z_h = 0.05$, stochastic depth zoneout $z = 0.05$, recurrent dropout $p = 0.25$, norm stabilizer $\beta = 50$, and weight noise $\sigma = 0.075$ (right).

Figure 4:
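The footnote's conversions, written out as a short sketch (assuming the NLL is an average in nats per predicted symbol):

```python
import numpy as np

def perplexity(nll_nats):
    # Perplexity is exponentiated NLL.
    return np.exp(nll_nats)

def bits_per_character(nll_nats):
    # BPC is NLL divided by the natural logarithm of 2.
    return nll_nats / np.log(2.0)
```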
1606.01305#15
1606.01305#17
1606.01305
[ "1603.05118" ]
1606.01305#17
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Training and validation bits-per-character (BPC) comparing LSTM regularization methods on character-level Penn Treebank (left) and Text8 (right).

4.2 TEXT8

Enwik8 is a corpus made from the first $10^9$ bytes of Wikipedia dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus; with html tags removed, numbers spelled out, symbols converted to spaces, and all text lower-cased. Both datasets were created and are hosted by Mahoney (2011). We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam (Kingma & Ba, 2014), clip gradients to a maximum norm of 1 (Pascanu et al., 2012), and use early stopping, again matching the settings of Cooijmans et al. (2016). Results are reported in Table 1, and Figure 4 shows training and validation learning curves for zoneout ($z_c = 0.5$, $z_h = 0.05$) compared to an unregularized LSTM and to recurrent dropout.

4.3 PERMUTED SEQUENTIAL MNIST

In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In pMNIST, the pixels are presented in a (fixed) random order. We compare recurrent dropout and zoneout to an unregularized LSTM baseline. All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp (Tieleman & Hinton, 2012) with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 (Pascanu et al., 2012).
1606.01305#16
1606.01305#18
1606.01305
[ "1603.05118" ]
1606.01305#18
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
As shown in Figure 5 and Table 2, zoneout gives a significant performance boost compared to the LSTM baseline and outperforms recurrent dropout (Semeniuta et al., 2016), although recurrent batch normalization (Cooijmans et al., 2016) outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state of the art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15.
1606.01305#17
1606.01305#19
1606.01305
[ "1603.05118" ]
1606.01305#19
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits-per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm).

Model | Char-PTB Valid | Char-PTB Test | Word-PTB Valid | Word-PTB Test | Text8 Valid | Text8 Test
Unregularized LSTM | 1.466 | 1.356 | 120.7 | 114.5 | 1.396 | 1.408
Weight noise | 1.507 | 1.344 | – | – | 1.356 | 1.367
Norm stabilizer | 1.459 | 1.352 | – | – | 1.382 | 1.398
1606.01305#18
1606.01305#20
1606.01305
[ "1603.05118" ]
1606.01305#20
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Stochastic depth | 1.432 | 1.343 | – | – | 1.337 | 1.343
Recurrent dropout | 1.396 | 1.286 | 91.6 | 87.0 | 1.386 | 1.401
Zoneout | 1.362 | 1.252 | 81.4 | 77.4 | 1.331 | 1.336
NR-dropout (Zaremba et al., 2014) | – | – | 82.2 | 78.4 | – | –
V-dropout (Gal, 2015) | – | – | – | 73.4 | – | –
RBN (Cooijmans et al., 2016) | – | 1.32 | – | – | – | 1.36
H-LSTM + LN (Ha et al., 2016) | 1.281 | 1.250 | – | – | – | –
3-HM-LSTM + LN (Chung et al., 2016) | – | 1.24 | – | – | – | 1.29

Table 2: Error rates on the pMNIST digit classification task. Zoneout outperforms recurrent dropout, and sets state of the art when combined with recurrent batch normalization.

Model | Valid | Test
Unregularized LSTM | 0.092 | 0.102
Recurrent dropout $p = 0.5$ | 0.083 | 0.075
Zoneout $z_c = z_h = 0.15$ | 0.063 | 0.069
Recurrent batchnorm | – | 0.046
Recurrent batchnorm & Zoneout $z_c = z_h = 0.15$ | 0.045 | 0.041

Figure 5: Training and validation error rates for an unregularized LSTM, recurrent dropout, and zoneout on the task of permuted sequential MNIST digit classification.
1606.01305#19
1606.01305#21
1606.01305
[ "1603.05118" ]
1606.01305#21
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
4.4 GRADIENT FLOW

We investigate the hypothesis that identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture Hochreiter & Schmidhuber (1997)), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture. We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on pMNIST. We compute the average gradient norms $\|\frac{\partial L}{\partial c_t}\|$ of loss $L$ with respect to cell activations $c_t$ at each timestep $t$, and for each method, normalize the average gradient norms by the sum of average gradient norms for all timesteps. Figure 6 shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states $h_t$.
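A sketch of this probe in PyTorch-style autograd (the paper's experiments used Theano; this illustration assumes retain_grad() was called on each stored cell state during the forward pass):

```python
import torch

def per_timestep_grad_norms(cells, loss):
    # Average gradient norm of the loss w.r.t. the cell state at each
    # timestep, normalized to sum to one across timesteps.
    loss.backward()
    norms = torch.tensor([c.grad.norm() for c in cells])
    return norms / norms.sum()
```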
1606.01305#20
1606.01305#22
1606.01305
[ "1603.05118" ]
1606.01305#22
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Figure 6: Normalized $\sum \|\frac{\partial L}{\partial c_t}\|$ of loss $L$ with respect to cell activations $c_t$ at each timestep, for zoneout ($z_c = 0.5$), dropout ($z_c = 0.5$), and an unregularized LSTM on one epoch of pMNIST.

# 5 CONCLUSION

We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with state of the art on the Penn Treebank and Text8 datasets, and state of the art results on pMNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05 - 0.2) on states reliably improve performance of existing models. We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on pMNIST and word-level Penn Treebank suggest that zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers.
1606.01305#21
1606.01305#23
1606.01305
[ "1603.05118" ]
1606.01305#23
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
We conjecture that the benefits of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the flow of information forward and backward through the network.

ACKNOWLEDGMENTS

We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially Çağlar Gülçehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano (Theano Development Team, 2016), Fuel, and Blocks (van Merriënboer et al., 2015). We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air
1606.01305#22
1606.01305#24
1606.01305
[ "1603.05118" ]
1606.01305#24
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

# REFERENCES

Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450. Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365–3373, 2014. J. Bayer, C. Osendorfer, D. Korhammer, N. Chen, S. Urban, and P. van der Smagt.
1606.01305#23
1606.01305#25
1606.01305
[ "1603.05118" ]
1606.01305#25
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
On Fast Dropout and its Applicability to Recurrent Networks. ArXiv e-prints, November 2013. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994. Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013. URL http://arxiv.org/abs/1308.3432. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704. Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gulcehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.
1606.01305#24
1606.01305#26
1606.01305
[ "1603.05118" ]
1606.01305#26
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3123–3131, 2015. Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints, December 2015. Felix A. Gers, Jürgen Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000. David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. CoRR, abs/1609.09106, 2016. URL http://arxiv.org/abs/1609.09106. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems. 1996. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen.
1606.01305#25
1606.01305#27
1606.01305
[ "1603.05118" ]
1606.01305#27
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Master's thesis, Institut für Informatik, Technische Universität München, 1991. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014. David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015. Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
1606.01305#26
1606.01305#28
1606.01305
[ "1603.05118" ]
1606.01305#28
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Matt Mahoney. About the test data, 2011. URL http://mattmahoney.net/dc/textdata. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993. Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. RNNDrop: A novel dropout for RNNs in ASR. Automatic Speech Recognition and Understanding (ASRU), 2015. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012. URL http://arxiv.org/abs/1211.5063. V. Pham, T. Bluche, C. Kermorvant, and J. Louradour.
1606.01305#27
1606.01305#29
1606.01305
[ "1603.05118" ]
1606.01305#29
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Dropout improves Recurrent Neural Networks for Handwriting Recognition. ArXiv e-prints, November 2013. Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. Surprisal-driven zoneout. CoRR, abs/1610.07675, 2016. URL http://arxiv.org/abs/1610.07675. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
1606.01305#28
1606.01305#30
1606.01305
[ "1603.05118" ]
1606.01305#30
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. ArXiv e-prints, May 2016. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012. Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio.
1606.01305#29
1606.01305#31
1606.01305
[ "1603.05118" ]
1606.01305#31
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Blocks and Fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118–126, 2013. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
1606.01305#30
1606.01305#32
1606.01305
[ "1603.05118" ]
1606.01305#32
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
6 APPENDIX

6.1 STATIC IDENTITY CONNECTIONS EXPERIMENT

This experiment was suggested by AnonReviewer2 during the ICLR review process with the goal of disentangling the effects zoneout has (1) through noise injection in the training process and (2) through identity connections. Based on these results, we observe that noise injection is essential for obtaining the regularization benefits of zoneout. In this experiment, one zoneout mask is sampled at the beginning of training, and used for all examples. This means the identity connections introduced are static across training examples (but still different for each timestep). Using static identity connections resulted in slightly lower training (but not validation) error than zoneout, but worse performance than an unregularized LSTM on both train and validation sets, as shown in Figure 7.
1606.01305#31
1606.01305#33
1606.01305
[ "1603.05118" ]
1606.01305#33
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
Figure 7: Training and validation curves for an LSTM with static identity connections compared to zoneout (both $z_c = 0.5$ and $z_h = 0.05$) and compared to a vanilla LSTM, showing that static identity connections fail to capture the benefits of zoneout.
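A sketch of the static-mask setup described above, assuming NumPy and illustrative shapes: the masks are sampled once before training and reused for every example, so no fresh noise is injected:

```python
import numpy as np

T, n, zc = 100, 1000, 0.5  # illustrative sequence length / width / rate
static_masks = [np.random.rand(n) < zc for _ in range(T)]  # fixed for all of training

def static_identity_step(c_prev, c_new, t):
    # Same mask for every training example (still one mask per timestep).
    return np.where(static_masks[t], c_prev, c_new)
```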
1606.01305#32
1606.01305#34
1606.01305
[ "1603.05118" ]
1606.01305#34
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
1606.01305#33
1606.01305
[ "1603.05118" ]
1605.09782#0
Adversarial Feature Learning
arXiv:1605.09782v7 [cs.LG] 3 Apr 2017. Published as a conference paper at ICLR 2017

# ADVERSARIAL FEATURE LEARNING

# Jeff Donahue [email protected] Computer Science Division University of California, Berkeley

# Philipp Krähenbühl [email protected] Department of Computer Science University of Texas, Austin

# Trevor Darrell [email protected] Computer Science Division University of California, Berkeley

# ABSTRACT
1605.09782#1
1605.09782
[ "1605.02688" ]
1605.09782#1
Adversarial Feature Learning
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping – projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
1605.09782#0
1605.09782#2
1605.09782
[ "1605.02688" ]
1605.09782#2
Adversarial Feature Learning
# INTRODUCTION

Deep convolutional networks (convnets) have become a staple of the modern computer vision pipeline. After training these models on a massive database of image-label pairs like ImageNet (Russakovsky et al., 2015), the network easily adapts to a variety of similar visual tasks, achieving impressive results on image classification (Donahue et al., 2014; Zeiler & Fergus, 2014; Razavian et al., 2014) or localization (Girshick et al., 2014; Long et al., 2015) tasks. In other perceptual domains such as natural language processing or speech recognition, deep networks have proven highly effective as well (Bahdanau et al., 2015; Sutskever et al., 2014; Vinyals et al., 2015; Graves et al., 2013). However, all of these recent results rely on a supervisory signal from large-scale databases of hand-labeled data, ignoring much of the useful information present in the structure of the data itself. Meanwhile, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. The GAN framework learns a generator mapping samples from an arbitrary latent distribution to data, as well as an adversarial discriminator which tries to distinguish between real and generated samples as accurately as possible. The generator's goal is to "fool" the discriminator by producing samples which are as close to real data as possible. When trained on databases of natural images, GANs produce impressive results (Radford et al., 2016; Denton et al., 2015). Interpolations in the latent space of the generator produce smooth and plausible semantic variations, and certain directions in this space correspond to particular semantic attributes along which the data distribution varies. For example, Radford et al. (2016) showed that a GAN trained on a database of human faces learns to associate particular latent directions with gender and the presence of eyeglasses.
1605.09782#1
1605.09782#3
1605.09782
[ "1605.02688" ]
1605.09782#3
Adversarial Feature Learning
A natural question arises from this ostensible "semantic juice" flowing through the weights of generators learned using the GAN framework: can GANs be used for unsupervised learning of rich feature representations for arbitrary data distributions? An obvious issue with doing so is that the generator maps latent samples to generated data, but the framework does not include an inverse mapping from data to latent representation.

Figure 1: The structure of Bidirectional Generative Adversarial Networks (BiGAN).

Hence, we propose a novel unsupervised feature learning framework, Bidirectional Generative Adversarial Networks (BiGAN). The overall model is depicted in Figure 1. In short, in addition to the generator $G$ from the standard GAN framework (Goodfellow et al., 2014), BiGAN includes an encoder $E$ which maps data $x$ to latent representations $z$. The BiGAN discriminator $D$ discriminates not only in data space ($x$ versus $G(z)$), but jointly in data and latent space (tuples $(x, E(x))$ versus $(G(z), z)$), where the latent component is either an encoder output $E(x)$ or a generator input $z$. It may not be obvious from this description that the BiGAN encoder $E$ should learn to invert the generator $G$. The two modules cannot directly "communicate" with one another: the encoder never "sees" generator outputs ($E(G(z))$ is not computed), and vice versa. Yet, in Section 3, we will both argue intuitively and formally prove that the encoder and generator must learn to invert one another in order to fool the BiGAN discriminator. Because the BiGAN encoder learns to predict features $z$ given data $x$, and prior work on GANs has demonstrated that these features capture semantic attributes of the data, we hypothesize that a trained BiGAN encoder may serve as a useful feature representation for related semantic tasks, in the same way that fully supervised visual models trained to predict semantic
1605.09782#2
1605.09782#4
1605.09782
[ "1605.02688" ]
1605.09782#4
Adversarial Feature Learning
"labels" given images serve as powerful feature representations for related visual tasks. In this context, a latent representation $z$ may be thought of as a "label" for $x$, but one which came for "free," without the need for supervision. An alternative approach to learning the inverse mapping from data to latent representation is to directly model $p(z|G(z))$, predicting generator input $z$ given generated data $G(z)$. We'll refer to this alternative as a latent regressor, later arguing (Section 4.1) that the BiGAN encoder may be preferable in a feature learning context, as well as comparing the approaches empirically. BiGANs are a robust and highly generic approach to unsupervised feature learning, making no assumptions about the structure or type of data to which they are applied, as our theoretical results will demonstrate. Our empirical studies will show that despite their generality, BiGANs are competitive with contemporary approaches to self-supervised and weakly supervised feature learning designed
1605.09782#3
1605.09782#5
1605.09782
[ "1605.02688" ]
1605.09782#5
Adversarial Feature Learning
specifically for a notoriously complex data distribution – natural images. Dumoulin et al. (2016) independently proposed an identical model in their concurrent work, exploring the case of a stochastic encoder $E$ and the ability of such models to learn in a semi-supervised setting.

# 2 PRELIMINARIES

Let $p_X(x)$ be the distribution of our data for $x \in \Omega_X$ (e.g. natural images). The goal of generative modeling is to capture this data distribution using a probabilistic model. Unfortunately, exact modeling of this probability density function is computationally intractable (Hinton et al., 2006; Salakhutdinov & Hinton, 2009) for all but the most trivial models. Generative Adversarial Networks (GANs) (Goodfellow
1605.09782#4
1605.09782#6
1605.09782
[ "1605.02688" ]
1605.09782#6
Adversarial Feature Learning
2 Published as a conference paper at ICLR 2017 fellow et al., 2014) instead model the data distribution as a transformation of a ï¬ xed latent distribution pZ(z) for z â â ¦Z. This transformation, called a generator, is expressed as a deterministic feed forward network G : â ¦Z â â ¦X with pG(x|z) = δ (x â G(z)) and pG(x) = Ezâ ¼pZ [pG(x|z)]. The goal is to train a generator such that pG(x) â pX(x). The GAN framework trains a generator, such that no discriminative model D : Qx ++ [0,1] can distinguish samples of the data distribution from samples of the generative distribution. Both generator and discriminator are learned using the adversarial (minimax) objective min max V(D,G), where learned using the adversarial (minimax) objective min max V(D, G) = Exnpx [log D(x)] + Ex~pe [log (1 - D())] V(D, G) = Exnpx [log D(x)] + Ex~pe [log (1 - D())] () Ez~pg [log(1â D(G(z)))] := Goodfellow et al. (2014) showed that for an ideal discriminator the objective C(G) maxD V (D, G) is equivalent to the Jensen-Shannon divergence between the two distributions pG and pX. The adversarial objective 1 does not directly lend itself to an efï¬ cient optimization, as each step in the generator G requires a full discriminator D to be learned. Furthermore, a perfect discriminator no longer provides any gradient information to the generator, as the gradient of any global or local maximum of V (D, G) is 0. To provide a strong gradient signal nonetheless, Goodfellow et al. (2014) slightly alter the objective between generator and discriminator updates, while keeping the same ï¬
1605.09782#5
1605.09782#7
1605.09782
[ "1605.02688" ]
1605.09782#7
Adversarial Feature Learning
fixed point characteristics. They also propose to optimize (1) using an alternating optimization switching between updates to the generator and discriminator. While this optimization is not guaranteed to converge, empirically it works well if the discriminator and generator are well balanced. Despite the empirical strength of GANs as generative models of arbitrary data distributions, it is not clear how they can be applied as an unsupervised feature representation. One possibility for learning such representations is to learn an inverse mapping regressing from generated data $G(z)$ back to the latent input $z$. However, unless the generator perfectly models the data distribution $p_X$, a nearly impossible objective for a complex data distribution such as that of high-resolution natural images, this idea may prove
1605.09782#6
1605.09782#8
1605.09782
[ "1605.02688" ]
1605.09782#8
Adversarial Feature Learning
insufficient.

# 3 BIDIRECTIONAL GENERATIVE ADVERSARIAL NETWORKS

In Bidirectional Generative Adversarial Networks (BiGANs) we not only train a generator, but additionally train an encoder $E : \Omega_X \to \Omega_Z$. The encoder induces a distribution $p_E(z|x) = \delta(z - E(x))$ mapping data points $x$ into the latent feature space of the generative model. The discriminator is also modified to take input from the latent space, predicting $P_D(Y | x, z)$, where $Y = 1$ if $x$ is real (sampled from the real data distribution $p_X$), and $Y = 0$ if $x$ is generated (the output of $G(z)$, $z \sim p_Z$). The BiGAN training objective is defined as a minimax objective

$\min_{G,E} \max_D V(D, E, G)$   (2)
1605.09782#7
1605.09782#9
1605.09782
[ "1605.02688" ]
1605.09782#9
Adversarial Feature Learning
where

$V(D, E, G) := \mathbb{E}_{x \sim p_X}\big[\underbrace{\mathbb{E}_{z \sim p_E(\cdot|x)}[\log D(x, z)]}_{\log D(x, E(x))}\big] + \mathbb{E}_{z \sim p_Z}\big[\underbrace{\mathbb{E}_{x \sim p_G(\cdot|z)}[\log(1 - D(x, z))]}_{\log(1 - D(G(z), z))}\big]$   (3)

We optimize this minimax objective using the same alternating gradient based optimization as Goodfellow et al. (2014). See Section 3.4 for details. BiGANs share many of the theoretical properties of GANs (Goodfellow et al., 2014), while additionally guaranteeing that at the global optimum, $G$ and $E$ are each other's inverse. BiGANs are also closely related to autoencoders with an $\ell_0$ loss function. In the following sections we highlight some of the appealing theoretical properties of BiGANs.
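The key difference from Eq. (1) is that the discriminator now scores joint pairs rather than data alone. An illustrative NumPy sketch of a minibatch estimate of Eq. (3) for deterministic E and G (callable names are assumptions):

```python
import numpy as np

def bigan_value_estimate(D, G, E, x_real, z):
    # Encoder pairs (x, E(x)) should be scored as real (toward 1);
    # generator pairs (G(z), z) as fake (toward 0).
    score_enc = D(x_real, E(x_real))
    score_gen = D(G(z), z)
    return np.mean(np.log(score_enc)) + np.mean(np.log(1.0 - score_gen))
```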
1605.09782#8
1605.09782#10
1605.09782
[ "1605.02688" ]
1605.09782#10
Adversarial Feature Learning
Definitions Let $p_{GZ}(x, z) := p_G(x|z)\, p_Z(z)$ and $p_{EX}(x, z) := p_E(z|x)\, p_X(x)$ be the joint distributions modeled by the generator and encoder respectively. $\Omega := \Omega_X \times \Omega_Z$ is the joint latent and data space.
1605.09782#9
1605.09782#11
1605.09782
[ "1605.02688" ]
1605.09782#11
Adversarial Feature Learning
By deï¬ nition, DKL ( P || Q ) := Exâ ¼P [log fP Q(x)] P +Q DJS ( P || Q ) := 1 2 2 where fpg := ra is the Radon-Nikodym (RN) derivative of measure P with respect to measure Q, with the defining property that P(R) = [;, fpq dQ. The RN derivative fpg : 2+ Ryo is defined for any measures P and Q on space 2 such that P is absolutely continuous with respect to Q: i.e., for any R CQ, P(R) > 0 = > Q(R) > 0.
1605.09782#10
1605.09782#12
1605.09782
[ "1605.02688" ]
1605.09782#12
Adversarial Feature Learning
3.1 OPTIMAL DISCRIMINATOR, GENERATOR, & ENCODER

We start by characterizing the optimal discriminator for any generator and encoder, following Goodfellow et al. (2014). This optimal discriminator then allows us to reformulate objective (3), and show that it reduces to the Jensen-Shannon divergence between the joint distributions $P_{EX}$ and $P_{GZ}$.

Proposition 1 For any $E$ and $G$, the optimal discriminator $D^*_{EG} := \arg\max_D V(D, E, G)$ is the Radon-Nikodym derivative $f_{EG} := \frac{dP_{EX}}{d(P_{EX} + P_{GZ})} : \Omega \to [0, 1]$ of measure $P_{EX}$ with respect to measure $P_{EX} + P_{GZ}$. Proof. Given in Appendix A.1.
1605.09782#11
1605.09782#13
1605.09782
[ "1605.02688" ]
1605.09782#13
Adversarial Feature Learning
This optimal discriminator now allows us to characterize the optimal generator and encoder.

Proposition 2 The encoder and generator's objective for an optimal discriminator $C(E, G) := \max_D V(D, E, G) = V(D^*_{EG}, E, G)$ can be rewritten in terms of the Jensen-Shannon divergence between measures $P_{EX}$ and $P_{GZ}$ as $C(E, G) = 2\, D_{JS}(P_{EX} \| P_{GZ}) - \log 4$. Proof. Given in Appendix A.2.

Theorem 1 The global minimum of $C(E, G)$ is achieved if and only if $P_{EX} = P_{GZ}$. At that point, $C(E, G) = -\log 4$ and $D^*_{EG} = \frac{1}{2}$.

Proof. From Proposition 2, we have that $C(E, G) = 2\, D_{JS}(P_{EX} \| P_{GZ}) - \log 4$. The Jensen-Shannon divergence $D_{JS}(P \| Q) \geq 0$ for any $P$ and $Q$, and $D_{JS}(P \| Q) = 0$ if and only if $P = Q$. Therefore, the global minimum of $C(E, G)$ occurs if and only if $P_{EX} = P_{GZ}$, and at this point the value is $C(E, G) = -\log 4$. Finally, $P_{EX} = P_{GZ}$ implies that the optimal discriminator is chance: $D^*_{EG} = \frac{1}{2}$.
1605.09782#12
1605.09782#14
1605.09782
[ "1605.02688" ]
1605.09782#14
Adversarial Feature Learning
log 4. Finally, PEX = PGZ implies that the optimal discriminator is chance: Dâ The optimal discriminator, encoder, and generator of BiGAN are similar to the optimal discriminator and generator of the GAN framework (Goodfellow et al., 2014). However, an important difference is that BiGAN optimizes a Jensen-Shannon divergence between a joint distribution over both data X and latent features Z. This joint divergence allows us to further characterize properties of G and E, as shown below. 3.2 OPTIMAL GENERATOR & ENCODER ARE INVERSES
1605.09782#13
1605.09782#15
1605.09782
[ "1605.02688" ]
1605.09782#15
Adversarial Feature Learning
We first present an intuitive argument that, in order to "fool" a perfect discriminator, a deterministic BiGAN encoder and generator must invert each other. (Later we will formally state and prove this property.) Consider a BiGAN discriminator input pair $(x, z)$. Due to the sampling procedure, $(x, z)$ must satisfy at least one of the following two properties:

(a) $x \in \hat{\Omega}_X \wedge E(x) = z$
(b) $z \in \hat{\Omega}_Z \wedge G(z) = x$

If only one of these properties is satisfied, a perfect discriminator can infer the source of $(x, z)$ with certainty: if only (a) is satisfied, $(x, z)$ must be an encoder pair $(x, E(x))$ and $D^*_{EG}(x, z) = 1$; if only (b) is satisfied, $(x, z)$ must be a generator pair $(G(z), z)$ and $D^*_{EG}(x, z) = 0$. Therefore, in order to fool a perfect discriminator at $(x, z)$ (so that $0 < D^*_{EG}(x, z) < 1$), $E$ and $G$ must satisfy both (a) and (b). In this case, we can substitute the equality $E(x) = z$ required by (a) into the equality $G(z) = x$ required by (b), and vice versa, giving the inversion properties $x = G(E(x))$ and $z = E(G(z))$. Formally, we show in Theorem 2 that the optimal generator and encoder invert one another almost everywhere on the supports
1605.09782#14
1605.09782#16
1605.09782
[ "1605.02688" ]
1605.09782#16
Adversarial Feature Learning
Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for PX-almost every x ∈ ΩX, and E(G(z)) = z for PZ-almost every z ∈ ΩZ.

Proof. Given in Appendix A.4.

While Theorem 2 characterizes the encoder and decoder at their optimum, due to the non-convex nature of the optimization, this optimum might never be reached. Experimentally, Section 4 shows that on standard datasets the two are approximate inverses; however, they are rarely exact inverses. It is thus also interesting to show what objective BiGAN optimizes in terms of E and G. Next we show that BiGANs are closely related to autoencoders with an ℓ0 loss function.

3.3 RELATIONSHIP TO AUTOENCODERS

As argued in Section 1, a model trained to predict features z given data x should learn useful semantic representations. Here we show that the BiGAN objective forces the encoder E to do exactly this: in order to fool the discriminator at a particular z, the encoder must invert the generator at that z, such that E(G(z)) = z.

Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := max_D V(D, E, G), can be rewritten as an ℓ0 autoencoder loss function

C(E, G) = E_{x∼pX}[ 1[x ∈ Ω̂X ∧ E(x) ∈ Ω̂Z ∧ G(E(x)) = x] log fEG(x, E(x)) ]
        + E_{z∼pZ}[ 1[z ∈ Ω̂Z ∧ G(z) ∈ Ω̂X ∧ E(G(z)) = z] log(1 − fEG(G(z), z)) ]

with log fEG ∈ (−∞, 0) and log(1 − fEG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere.

Proof. Given in Appendix A.5.
Here the indicator function 1[G(E(x)) = x] in the first term is equivalent to an autoencoder with ℓ0 loss, while the indicator 1[E(G(z)) = z] in the second term shows that the BiGAN encoder must invert the generator, the desired property for feature learning. The objective further encourages the functions E(x) and G(z) to produce valid outputs in the supports of pZ and pX respectively. Unlike regular autoencoders, the ℓ0 loss function does not make any assumptions about the structure or distribution of the data itself; in fact, all the structural properties of BiGAN are learned as part of the discriminator.

3.4 LEARNING

In practice, as in the GAN framework (Goodfellow et al., 2014), each BiGAN module D, G, and E is a parametric function (with parameters θD, θG, and θE, respectively). As a whole, BiGAN can be optimized using alternating stochastic gradient steps. In one iteration, the discriminator parameters θD are updated by taking one or more steps in the positive gradient direction ∇θD V(D, E, G), then the encoder parameters θE and generator parameters θG are together updated by taking a step in the negative gradient direction −∇θE,θG V(D, E, G). In both cases, the expectation terms of V(D, E, G) are estimated using mini-batches of n samples {x(i) ∼ pX}_{i=1}^{n} and {z(i) ∼ pZ}_{i=1}^{n} drawn independently for each update step.
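As an illustration, a minimal sketch of one such alternating update in PyTorch-style code follows; the module and optimizer objects are hypothetical stand-ins, not the authors' Theano implementation, and D is assumed to output probabilities in (0, 1):

```python
import torch

def bigan_update(D, G, E, opt_D, opt_GE, x, z, eps=1e-8):
    """One alternating BiGAN step: ascend V(D, E, G) in D, then descend it in (G, E)."""
    # Mini-batch estimate of V using joint pairs (x, E(x)) and (G(z), z).
    V = (torch.log(D(x, E(x)) + eps)
         + torch.log(1 - D(G(z), z) + eps)).mean()

    opt_D.zero_grad()
    (-V).backward()      # a step in the +grad direction of V for theta_D
    opt_D.step()

    # Recompute V with the updated discriminator, then take a step in the
    # -grad direction of V for theta_E and theta_G.
    V = (torch.log(D(x, E(x)) + eps)
         + torch.log(1 - D(G(z), z) + eps)).mean()
    opt_GE.zero_grad()
    V.backward()
    opt_GE.step()
```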
Goodfellow et al. (2014) found that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective provides stronger gradient signal to G and E. For efficiency, we also update all modules D, G, and E simultaneously at each iteration, rather than alternating between D updates and G, E updates. See Appendix B for details.

3.5 GENERALIZED BIGAN

It is often useful to parametrize the output of the generator G and encoder E in a different, usually smaller, space Ω′X and Ω′Z rather than the original ΩX and ΩZ. For example, for visual feature learning, the images input to the encoder should be of similar resolution to images used in the evaluation. On the other hand, generating high resolution images remains difficult for current generative models. In this situation, the encoder may take higher resolution input while the generator output and discriminator input remain low resolution.

We generalize the BiGAN objective V(D, G, E) (3) with functions gX : ΩX → Ω′X and gZ : ΩZ → Ω′Z, and encoder E : ΩX → Ω′Z, generator G : ΩZ → Ω′X, and discriminator D : Ω′X × Ω′Z → [0, 1]:

E_{x∼pX}[ E_{z′∼pE(·|x)}[ log D(gX(x), z′) ] ] + E_{z∼pZ}[ E_{x′∼pG(·|z)}[ log(1 − D(x′, gZ(z))) ] ],

where for deterministic E and G the inner expectations reduce to log D(gX(x), E(x)) and log(1 − D(G(z), gZ(z))) respectively.
An identity gX(x) = x and gZ(z) = z (with Ω′X = ΩX and Ω′Z = ΩZ) yields the original objective. For visual feature learning with higher resolution encoder inputs, gX is an image resizing function that downsamples a high resolution image x ∈ ΩX to a lower resolution image x′ ∈ Ω′X, as output by the generator (gZ is the identity).

In this case, the encoder and generator respectively induce probability measures PEX′ and PGZ′ over regions R ⊆ Ω′ of the joint space Ω′ := Ω′X × Ω′Z; for deterministic E and G,

PEX′(R) := ∫_{ΩX} pX(x) 1[(gX(x), E(x)) ∈ R] dx,

with PGZ′ defined analogously. For optimal E and G, we can show PEX′ = PGZ′: a generalization of Theorem 1. When E and G are deterministic and optimal, Theorem 2 (that E and G invert one another) can also be generalized: there exists z ∈ Ω̂Z with E(x) = gZ(z) ∧ G(z) = gX(x) for PX-almost every x ∈ ΩX, and there exists x ∈ Ω̂X with E(x) = gZ(z) ∧ G(z) = gX(x) for PZ-almost every z ∈ ΩZ.
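A small sketch of how the discriminator inputs could be formed in this higher-resolution setting, assuming 112 × 112 encoder inputs and 64 × 64 generator outputs as in Section 4.3 (the function below is an illustration, not the paper's code):

```python
import torch.nn.functional as F

def generalized_pairs(G, E, x_hi, z):
    """Form the two joint pairs for D when g_X is bilinear downsampling and
    g_Z is the identity: (g_X(x), E(x)) from data, (G(z), g_Z(z)) from noise."""
    x_lo = F.interpolate(x_hi, size=(64, 64), mode="bilinear",
                         align_corners=False)  # g_X(x): image resizing
    return (x_lo, E(x_hi)), (G(z), z)
```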
# 4 EVALUATION

We evaluate the feature learning capabilities of BiGANs by first training them unsupervised as described in Section 3.4, then transferring the encoder's learned feature representations for use in auxiliary supervised learning tasks. To demonstrate that BiGANs are able to learn meaningful feature representations both on arbitrary data vectors, where the model is agnostic to any underlying structure, as well as on very high-dimensional and complex distributions, we evaluate on both permutation-invariant MNIST (LeCun et al., 1998) and on the high-resolution natural images of ImageNet (Russakovsky et al., 2015).

In all experiments, each module D, G, and E is a parametric deep (multi-layer) network. The BiGAN discriminator D(x, z) takes data x as its initial input, and at each linear layer thereafter, the latent representation z is transformed using a learned linear transformation to the hidden layer dimension and added to the non-linearity input, as in the sketch below.
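One hidden layer of such a joint discriminator might look like the following sketch (layer sizes and names are illustrative assumptions, not the paper's exact architecture):

```python
import torch.nn as nn
import torch.nn.functional as F

class JointLayer(nn.Module):
    """A D(x, z) hidden layer: z is linearly projected to the hidden
    dimension and added to the pre-activation of the x pathway."""
    def __init__(self, in_dim, hidden_dim, z_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, hidden_dim)
        self.z_proj = nn.Linear(z_dim, hidden_dim, bias=False)

    def forward(self, h, z):
        return F.leaky_relu(self.fc(h) + self.z_proj(z), 0.2)
```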
4.1 BASELINE METHODS

Besides the BiGAN framework presented above, we considered alternative approaches to learning feature representations using different GAN variants.

Discriminator The discriminator D in a standard GAN takes data samples x ∼ pX as input, making its learned intermediate representations natural candidates as feature representations for related tasks.

| | BiGAN | D | LR | JLR | AE (ℓ2) | AE (ℓ1) |
|---|---|---|---|---|---|---|
| 1NN accuracy (%) | 97.39 | 97.30 | 97.44 | 97.13 | 97.58 | 97.63 |

Table 1: One Nearest Neighbors (1NN) classification accuracy (%) on the permutation-invariant MNIST (LeCun et al., 1998) test set in the feature space learned by BiGAN, Latent Regressor (LR), Joint Latent Regressor (JLR), and an autoencoder (AE) using an ℓ1 or ℓ2 distance.
Figure 2: Qualitative results for permutation-invariant MNIST BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).

This alternative is appealing as it requires no additional machinery, and is the approach used for unsupervised feature learning in Radford et al. (2016). On the other hand, it is not clear that the task of distinguishing between real and generated data requires or benefits from intermediate representations that are useful as semantic feature representations. In fact, if G successfully generates the true data distribution pX(x), D may ignore the input data entirely and predict P(Y = 1) = P(Y = 1|x) = 1/2 unconditionally, not learning any meaningful intermediate representations.

Latent regressor We consider an alternative encoder training by minimizing a reconstruction loss L(z, E(G(z))), applied after or jointly during a regular GAN training, called latent regressor or joint latent regressor respectively. We use a sigmoid cross entropy loss L as it naturally maps to a uniformly distributed output space. Intuitively, a drawback of this approach is that, unlike the encoder in a BiGAN, the latent regressor encoder E is trained only on generated samples G(z), and never "sees" real data x ∼ pX.
While this may not be an issue in the theoretical optimum where pG(x) = pX(x) exactly, i.e., where G perfectly generates the data distribution pX, in practice, for highly complex data distributions pX, such as the distribution of natural images, the generator will almost never achieve this perfect result. The fact that the real data x are never input to this type of encoder limits its utility as a feature representation for related tasks, as shown later in this section.
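A sketch of this baseline loss, under the assumption that z ∼ U(−1, 1) is rescaled to (0, 1) as the soft target and that E outputs pre-sigmoid logits (both details are our illustrative reading, not a specification from the paper):

```python
import torch.nn.functional as F

def latent_regressor_loss(E, G, z):
    """Sigmoid cross-entropy reconstruction loss L(z, E(G(z))).
    Only the encoder is trained here; generator outputs are detached."""
    target = (z + 1) / 2                  # map U(-1, 1) samples into (0, 1)
    logits = E(G(z).detach())
    return F.binary_cross_entropy_with_logits(logits, target)
```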
4.2 PERMUTATION-INVARIANT MNIST

We first present results on permutation-invariant MNIST (LeCun et al., 1998). In the permutation-invariant setting, each 28 × 28 digit image must be treated as an unstructured 784D vector (Goodfellow et al., 2013). In our case, this condition is met by designing each module as a multi-layer perceptron (MLP), agnostic to the underlying spatial structure in the data (as opposed to a convnet, for example). See Appendix C.1 for more architectural and training details. We set the latent distribution pZ = [U(−1, 1)]^50, a 50D continuous uniform distribution.
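For concreteness, the prior and a permutation-invariant encoder might be set up as in this sketch (mirroring the two-hidden-layer, 1024-unit MLP design of Appendix C.1; the exact batch norm placement is simplified here):

```python
import torch
import torch.nn as nn

Z_DIM = 50

def sample_z(n):
    """Draw n samples from pZ = [U(-1, 1)]^50."""
    return torch.rand(n, Z_DIM) * 2 - 1

# Encoder E over flat 784-D MNIST vectors; D and G are built analogously.
E = nn.Sequential(
    nn.Linear(784, 1024),
    nn.LeakyReLU(0.2),
    nn.Linear(1024, 1024),
    nn.BatchNorm1d(1024, affine=False),  # parameter-free batch norm
    nn.LeakyReLU(0.2),
    nn.Linear(1024, Z_DIM),
)
```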
Table 1 compares the encoding learned by a BiGAN-trained encoder E with the baselines described in Section 4.1, as well as autoencoders trained directly to minimize either ℓ2 or ℓ1 reconstruction error. The same architecture and optimization algorithm is used across all methods. All methods, including BiGAN, perform at roughly the same level. This result is not overly surprising given the relative simplicity of MNIST digits. For example, digits generated by G in a GAN nearly perfectly match the data distribution (qualitatively), making the latent regressor (LR) baseline method a reasonable choice, as argued in Section 4.1. Qualitative results are presented in Figure 2.

4.3 IMAGENET

Next, we present results from training BiGANs on ImageNet LSVRC (Russakovsky et al., 2015), a large-scale database of natural images. GANs trained on ImageNet cannot perfectly reconstruct the data, but often capture some interesting aspects.
Figure 3: The convolutional filters learned by the three modules (D, G, and E) of a BiGAN (left, top-middle) trained on the ImageNet (Russakovsky et al., 2015) database. We compare with the filters learned by a discriminator D trained with the same architecture (bottom-middle), as well as the filters reported by Noroozi & Favaro (2016), and by Krizhevsky et al. (2012) for fully supervised ImageNet training (right).
Figure 4: Qualitative results for ImageNet BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).

Here, each of D, G, and E is a convnet. In all experiments, the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5). We also experiment with an AlexNet-based discriminator D as a baseline feature learning approach. We set the latent distribution pZ = [U(−1, 1)]^200, a 200D continuous uniform distribution. Additionally, we experiment with higher resolution encoder input images, 112 × 112 rather than the 64 × 64 used elsewhere, using the generalization described in Section 3.5. See Appendix C.2 for more architectural and training details.

Qualitative results The convolutional filters learned by each of the three modules are shown in Figure 3. We see that the filters learned by the encoder E have clear Gabor-like structure, similar to those originally reported for the fully supervised AlexNet model (Krizhevsky et al., 2012). The filters also have similar "grouping" structure where one half (the bottom half, in this case) is more color sensitive, and the other half is more edge sensitive. (This separation of the filters occurs due to the AlexNet architecture maintaining two separate filter paths for computational efficiency.)
In Figure 4 we present sample generations G(z), as well as real data samples x and their BiGAN reconstructions G(E(x)). The reconstructions, while certainly imperfect, demonstrate empirically that the BiGAN encoder E and generator G learn approximate inverse mappings, as shown theoretically in Theorem 2. In Appendix C.2, we present nearest neighbors in the BiGAN learned feature space.

| | conv1 | conv2 | conv3 | conv4 | conv5 |
|---|---|---|---|---|---|
| Random (Noroozi & Favaro, 2016) | 48.5 | 41.0 | 34.8 | 27.1 | 12.0 |
| Wang & Gupta (2015) | 51.8 | 46.9 | 42.8 | 38.8 | 29.8 |
| Doersch et al. (2015) | 53.1 | 47.6 | 48.7 | 45.6 | 30.4 |
| Noroozi & Favaro (2016)* | 57.1 | 56.0 | 52.4 | 48.3 | 38.1 |
| BiGAN (ours) | 56.2 | 54.4 | 49.4 | 43.9 | 33.3 |
| BiGAN, 112 × 112 E (ours) | 55.3 | 53.2 | 49.3 | 44.4 | 34.8 |

Table 2: Classification accuracy (%) for the ImageNet LSVRC (Russakovsky et al., 2015) validation set with various portions of the network frozen, or reinitialized and trained from scratch, following the evaluation from Noroozi & Favaro (2016). In, e.g., the conv3 column, the first three layers (conv1 through conv3) are transferred and frozen, and the last layers (conv4, conv5, and fully connected layers) are reinitialized and trained fully supervised for ImageNet classification. BiGAN is competitive with these contemporary visual feature learning methods, despite its generality. (*Results from Noroozi & Favaro (2016) are not directly comparable to those of the other methods as a different base convnet architecture with larger intermediate feature maps is used.)
ImageNet classification Following Noroozi & Favaro (2016), we evaluate by freezing the first N layers of our pretrained network and randomly reinitializing and training the remainder fully supervised for ImageNet classification. Results are reported in Table 2.
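A sketch of this freezing protocol over an AlexNet-style feature stack (the sequential layout is an assumption for illustration):

```python
import torch.nn as nn

def freeze_through_conv_n(features: nn.Sequential, n: int):
    """Freeze all layers up to and including the n-th convolution; layers
    after it remain trainable and would be reinitialized before supervised
    ImageNet training."""
    convs_seen = 0
    for layer in features:
        if isinstance(layer, nn.Conv2d):
            convs_seen += 1
        if convs_seen > n:
            break
        for p in layer.parameters():
            p.requires_grad = False
```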
VOC classification, detection, and segmentation We evaluate the transferability of BiGAN representations to the PASCAL VOC (Everingham et al., 2014) computer vision benchmark tasks, including classification, object detection, and semantic segmentation. The classification task involves simple binary prediction of presence or absence in a given image for each of 20 object categories. The object detection and semantic segmentation tasks go a step further by requiring the objects to be localized, with semantic segmentation requiring this at the finest scale: pixelwise prediction of object identity. For detection, the pretrained model is used as the initialization for Fast R-CNN (Girshick, 2015) (FRCN) training; and for semantic segmentation, the model is used as the initialization for Fully Convolutional Network (Long et al., 2015) (FCN) training, in each case replacing the AlexNet (Krizhevsky et al., 2012) model trained fully supervised for ImageNet classification.
We report results on each of these tasks in Table 3, comparing BiGANs with contemporary approaches to unsupervised (Krähenbühl et al., 2016) and self-supervised (Doersch et al., 2015; Agrawal et al., 2015; Wang & Gupta, 2015; Pathak et al., 2016) feature learning in the visual domain, as well as the baselines discussed in Section 4.1.

# 4.4 DISCUSSION

Despite making no assumptions about the underlying structure of the data, the BiGAN unsupervised feature learning framework offers a representation competitive with existing self-supervised and even weakly supervised feature learning approaches for visual feature learning, while still being a purely generative model with the ability to sample data x and predict latent representation z. Furthermore, BiGANs outperform the discriminator (D) and latent regressor (LR) baselines discussed in Section 4.1, confirming our intuition that these approaches may not perform well in the regime of highly complex data distributions such as that of natural images. The version in which the encoder takes a higher resolution image than output by the generator (BiGAN 112 × 112 E) performs better still; this strategy is not possible under the LR and D baselines, as each of those modules takes generator outputs as its input.

Although existing self-supervised approaches have shown impressive performance and thus far tended to outshine purely unsupervised approaches in the complex domain of high-resolution images, purely unsupervised approaches to feature learning or pre-training have several potential benefits.
| | | Classification (% mAP) | | | FRCN Detection (% mAP) | FCN Segmentation (% mIU) |
|---|---|---|---|---|---|---|
| | trained layers | fc8 | fc6-8 | all | all | all |
| sup. | ImageNet (Krizhevsky et al., 2012) | 77.0 | 78.8 | 78.3 | 56.8 | 48.0 |
| self-sup. | Agrawal et al. (2015) | 31.2 | 31.0 | 54.2 | 43.9 | - |
| | Pathak et al. (2016) | 30.5 | 34.6 | 56.5 | 44.5 | 30.0 |
| | Wang & Gupta (2015) | 28.4 | 55.6 | 63.1 | 47.4 | - |
| | Doersch et al. (2015) | 44.7 | 55.1 | 65.3 | 51.1 | - |
| unsup. | k-means (Krähenbühl et al., 2016) | 32.0 | 39.2 | 56.6 | 45.6 | 32.6 |
| | Discriminator (D) | 30.7 | 40.5 | 56.4 | - | - |
| | Latent Regressor (LR) | 36.9 | 47.9 | 57.1 | - | - |
| | Joint LR | 37.1 | 47.9 | 56.5 | - | - |
| | Autoencoder (ℓ2) | 24.8 | 16.0 | 53.8 | 41.9 | - |
| | BiGAN (ours) | 37.5 | 48.7 | 58.9 | 46.2 | 34.9 |
| | BiGAN, 112 × 112 E (ours) | 41.7 | 52.5 | 60.3 | 46.9 | 35.2 |
Table 3: Classification and Fast R-CNN (Girshick, 2015) detection results for the PASCAL VOC 2007 (Everingham et al., 2014) test set, and FCN (Long et al., 2015) segmentation results on the PASCAL VOC 2012 validation set, under the standard mean average precision (mAP) or mean intersection over union (mIU) metrics for each task. Classification models are trained with various portions of the AlexNet (Krizhevsky et al., 2012) model frozen. In the fc8 column, only the linear classifier (a multinomial logistic regression) is learned; in the case of BiGAN, it is learned on top of randomly initialized fully connected (FC) layers fc6 and fc7. In the fc6-8 column, all three FC layers are trained fully supervised with all convolution layers frozen. Finally, in the all column, the entire network is "fine-tuned".
BiGAN outperforms other unsupervised (unsup.) feature learning approaches, including the GAN-based baselines described in Section 4.1, and despite its generality, is competitive with contemporary self-supervised (self-sup.) feature learning approaches specific to the visual domain.

BiGAN and other unsupervised learning approaches are agnostic to the domain of the data. The self-supervised approaches are specific to the visual domain, in some cases requiring weak supervision from video unavailable in images alone. For example, the methods are not applicable in the permutation-invariant MNIST setting explored in Section 4.2, as the data are treated as flat vectors rather than 2D images. Furthermore, BiGAN and other unsupervised approaches needn't suffer from domain shift between the pre-training task and the transfer task, unlike self-supervised methods in which some aspect of the data is normally removed or corrupted in order to create a non-trivial prediction task. In the context prediction task (Doersch et al., 2015), the network sees only small image patches; the global image structure is unobserved. In the context encoder or inpainting task (Pathak et al., 2016), each image is corrupted by removing large areas to be filled in by the prediction network, creating inputs with dramatically different appearance from the uncorrupted natural images seen in the transfer tasks.

Other approaches (Agrawal et al., 2015; Wang & Gupta, 2015) rely on auxiliary information unavailable in the static image domain, such as video, egomotion, or tracking. Unlike BiGAN, such approaches cannot learn feature representations from unlabeled static images.
We finally note that the results presented here constitute only a preliminary exploration of the space of model architectures possible under the BiGAN framework, and we expect results to improve significantly with advancements in generative image models and discriminative convolutional networks alike.

# ACKNOWLEDGMENTS

The authors thank Evan Shelhamer, Jonathan Long, and other Berkeley Vision labmates for helpful discussions throughout this work. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artificial Intelligence Research laboratory. The GPUs used for this work were donated by NVIDIA.
# REFERENCES

Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In ICCV, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.

Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.

Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes challenge: A retrospective. IJCV, 2014.

Ross Girshick. Fast R-CNN. In ICCV, 2015.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.

Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.

Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Philipp Krähenbühl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 1998.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.

Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR Workshops, 2014.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. IJCV, 2015.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, 2016.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In NIPS, 2015.

Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.

# APPENDIX A ADDITIONAL PROOFS

A.1 PROOF OF PROPOSITION 1 (OPTIMAL DISCRIMINATOR)

Proposition 1 For any E and G, the optimal discriminator D∗EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative fEG := dPEX / d(PEX + PGZ) : Ω → [0, 1] of measure PEX with respect to measure PEX + PGZ.

Proof. For measures P and Q on space Ω, with P absolutely continuous with respect to Q, the RN derivative fPQ := dP/dQ exists, and we have

E_{x∼P}[g(x)] = ∫_Ω g dP = ∫_Ω g (dP/dQ) dQ = ∫_Ω g fPQ dQ = E_{x∼Q}[fPQ(x) g(x)].   (4)

Let the probability measure PEG := (PEX + PGZ)/2 denote the average of measures PEX and PGZ. Both PEX and PGZ are absolutely continuous with respect to PEG. Hence the RN derivatives fEG := dPEX / d(PEX + PGZ) and fGE := dPGZ / d(PEX + PGZ) exist, and they sum to one:

fEG + fGE = dPEX / d(PEX + PGZ) + dPGZ / d(PEX + PGZ) = 1.   (5)

We use (4) and (5) to rewrite the objective V (3) as a single expectation under measure PEG:
V(D, E, G) = E_{(x,z)∼PEX}[log D(x, z)] + E_{(x,z)∼PGZ}[log(1 − D(x, z))]
= E_{(x,z)∼PEG}[2 fEG(x, z) log D(x, z)] + E_{(x,z)∼PEG}[2 fGE(x, z) log(1 − D(x, z))]
= 2 E_{(x,z)∼PEG}[fEG(x, z) log D(x, z) + fGE(x, z) log(1 − D(x, z))]
= 2 E_{(x,z)∼PEG}[fEG(x, z) log D(x, z) + (1 − fEG(x, z)) log(1 − D(x, z))].

Note that argmax_y {a log y + (1 − a) log(1 − y)} = a for any a ∈ [0, 1]. Thus, D∗EG = fEG.

A.2 PROOF OF PROPOSITION 2 (ENCODER AND GENERATOR OBJECTIVE)

Proposition 2 The encoder and generator's objective for an optimal discriminator, C(E, G) := max_D V(D, E, G) = V(D∗EG, E, G), can be rewritten in terms of the Jensen-Shannon divergence between measures PEX and PGZ as C(E, G) = 2 DJS(PEX ‖ PGZ) − log 4.
Proof. Using Proposition 1 along with (5) (so that 1 − D∗EG = 1 − fEG = fGE), we rewrite the objective

C(E, G) = max_D V(D, E, G) = V(D∗EG, E, G)
= E_{(x,z)∼PEX}[log D∗EG(x, z)] + E_{(x,z)∼PGZ}[log(1 − D∗EG(x, z))]
= E_{(x,z)∼PEX}[log fEG(x, z)] + E_{(x,z)∼PGZ}[log fGE(x, z)]
= E_{(x,z)∼PEX}[log(2 fEG(x, z))] + E_{(x,z)∼PGZ}[log(2 fGE(x, z))] − log 4
= DKL(PEX ‖ PEG) + DKL(PGZ ‖ PEG) − log 4
= DKL(PEX ‖ (PEX + PGZ)/2) + DKL(PGZ ‖ (PEX + PGZ)/2) − log 4
= 2 DJS(PEX ‖ PGZ) − log 4.

# A.3 MEASURE DEFINITIONS FOR DETERMINISTIC E AND G

While Theorem 1 and Propositions 1 and 2 hold for any encoder pE(z|x) and generator pG(x|z), stochastic or deterministic, Theorems 2 and 3 assume the encoder E and generator G are deterministic functions; i.e., their conditionals are defined as δ functions, pE(z|x) = δ(z − E(x)) and pG(x|z) = δ(x − G(z)).
For use in the proofs of those theorems, we simplify the definitions of measures PEX and PGZ given in Section 3 for the case of deterministic functions E and G below:

PEX(R) = ∫_{ΩX} pX(x) ∫_{ΩZ} pE(z|x) 1[(x, z) ∈ R] dz dx = ∫_{ΩX} pX(x) ( ∫_{ΩZ} δ(z − E(x)) 1[(x, z) ∈ R] dz ) dx = ∫_{ΩX} pX(x) 1[(x, E(x)) ∈ R] dx

PGZ(R) = ∫_{ΩZ} pZ(z) ∫_{ΩX} pG(x|z) 1[(x, z) ∈ R] dx dz = ∫_{ΩZ} pZ(z) ( ∫_{ΩX} δ(x − G(z)) 1[(x, z) ∈ R] dx ) dz = ∫_{ΩZ} pZ(z) 1[(G(z), z) ∈ R] dz

A.4 PROOF OF THEOREM 2 (OPTIMAL GENERATOR AND ENCODER ARE INVERSES)

Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for PX-almost every x ∈ ΩX, and E(G(z)) = z for PZ-almost every z ∈ ΩZ.
Proof. Let R0_X := {x ∈ ΩX : x ≠ G(E(x))} be the region of ΩX in which the inversion property x = G(E(x)) does not hold. We will show that, for optimal E and G, R0_X has measure zero under PX (i.e., PX(R0_X) = 0) and therefore x = G(E(x)) holds PX-almost everywhere.

Let R0 := {(x, z) ∈ Ω : z = E(x) ∧ x ∈ R0_X} be the region of Ω such that (x, E(x)) ∈ R0 if and only if x ∈ R0_X. We'll use the definitions of PEX and PGZ for deterministic E and G (Appendix A.3), and the fact that PEX = PGZ for optimal E and G (Theorem 1):
PX(R0_X) = ∫_{ΩX} pX(x) 1[x ∈ R0_X] dx = ∫_{ΩX} pX(x) 1[(x, E(x)) ∈ R0] dx = PEX(R0) = PGZ(R0)
= ∫_{ΩZ} pZ(z) 1[(G(z), z) ∈ R0] dz
= ∫_{ΩZ} pZ(z) 1[z = E(G(z)) ∧ G(z) ∈ R0_X] dz
= ∫_{ΩZ} pZ(z) 1[z = E(G(z)) ∧ G(z) ≠ G(E(G(z)))] dz
= 0,

where the final integrand is 0 for any z, as z = E(G(z)) implies G(z) = G(E(G(z))), so the two conditions cannot hold simultaneously.

Hence region R0_X has measure zero (PX(R0_X) = 0), and the inversion property x = G(E(x)) holds PX-almost everywhere. An analogous argument shows that R0_Z := {z ∈ ΩZ : z ≠ E(G(z))} has measure zero under PZ (i.e., PZ(R0_Z) = 0) and therefore z = E(G(z)) holds PZ-almost everywhere.

# A.5 PROOF OF THEOREM 3 (RELATIONSHIP TO AUTOENCODERS)

As shown in Proposition 2 (Section 3), the BiGAN objective is equivalent to the Jensen-Shannon divergence between PEX and PGZ. We now go a step further and show that this Jensen-Shannon divergence is closely related to a standard autoencoder loss. Omitting the 1/2 scale factor, a KL divergence term of the Jensen-Shannon divergence is given as
DKL(PEX ‖ (PEX + PGZ)/2) = ∫_Ω log( dPEX / d((PEX + PGZ)/2) ) dPEX = log 2 + ∫_Ω log f dPEX,   (6)

where we abbreviate as f the Radon-Nikodym derivative fEG := dPEX / d(PEX + PGZ) ∈ [0, 1] defined in Proposition 1 for most of this proof.
We'll make use of the definitions of PEX and PGZ for deterministic E and G found in Appendix A.3. The integral term of the KL divergence expression given in (6) over a particular region R ⊆ Ω will be denoted by

F(R) := ∫_R log( dPEX / d(PEX + PGZ) ) dPEX = ∫_R log f dPEX.

Next we will show that f > 0 holds PEX-almost everywhere, and hence F is always well defined and finite.
We then show that F is equivalent to an autoencoder-like reconstruction loss function.

Proposition 3 f > 0 PEX-almost everywhere.

Proof. Let R^{f=0} := {(x, z) ∈ Ω : f(x, z) = 0} be the region of Ω in which f = 0. Using the definition of the Radon-Nikodym derivative f, the measure PEX(R^{f=0}) = ∫_{R^{f=0}} f d(PEX + PGZ) = ∫_{R^{f=0}} 0 d(PEX + PGZ) = 0 is zero. Hence f > 0 PEX-almost everywhere.

Proposition 3 ensures that log f is defined PEX-almost everywhere, and F(R) is well defined. Next we will show that F(R) mimics an autoencoder with ℓ0 loss, meaning F is zero for any region in which G(E(x)) ≠ x, and non-zero otherwise.

Proposition 4 The KL divergence F outside the support of PGZ is zero: F(Ω \ supp(PGZ)) = 0.

Proof. We'll first show that in region RS := Ω \ supp(PGZ), we have f = 1 PEX-almost everywhere. Let R^{f<1} := {(x, z) ∈ RS : f(x, z) < 1} be the region of RS in which f < 1, and assume PEX(R^{f<1}) > 0 has non-zero measure. Then, using the definition of the Radon-Nikodym derivative,

PEX(R^{f<1}) = ∫_{R^{f<1}} f d(PEX + PGZ) = ∫_{R^{f<1}} f dPEX + ∫_{R^{f<1}} f dPGZ ≤ ε PEX(R^{f<1}) + 0 < PEX(R^{f<1}),

where ε < 1 is a constant and the PGZ term vanishes because R^{f<1} lies outside supp(PGZ). But PEX(R^{f<1}) < PEX(R^{f<1}) is a contradiction; hence PEX(R^{f<1}) = 0 and f = 1 PEX-almost everywhere in RS, implying log f = 0 PEX-almost everywhere in RS. Hence F(RS) = 0.
By definition, F(Ω \ supp(PEX)) is also zero. The only region where F might be non-zero is R1 := supp(PEX) ∩ supp(PGZ).

Proposition 5 f < 1 PEX-almost everywhere in R1.

Proof. Let R^{f=1} := {(x, z) ∈ R1 : f(x, z) = 1} be the region in which f = 1, and assume the set R^{f=1} is not empty. By definition of the support¹, PEX(R^{f=1}) > 0 and PGZ(R^{f=1}) > 0. The Radon-Nikodym derivative on R^{f=1} then gives

PEX(R^{f=1}) = ∫_{R^{f=1}} f d(PEX + PGZ) = ∫_{R^{f=1}} 1 d(PEX + PGZ) = PEX(R^{f=1}) + PGZ(R^{f=1}),

which implies PGZ(R^{f=1}) = 0 and contradicts the definition of support. Hence R^{f=1} = ∅ and f < 1 PEX-almost everywhere on R1, implying log f < 0 PEX-almost everywhere.

Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := max_D V(D, E, G), can be rewritten as an ℓ0 autoencoder loss function

C(E, G) = E_{x∼pX}[ 1[x ∈ Ω̂X ∧ E(x) ∈ Ω̂Z ∧ G(E(x)) = x] log fEG(x, E(x)) ]
        + E_{z∼pZ}[ 1[z ∈ Ω̂Z ∧ G(z) ∈ Ω̂X ∧ E(G(z)) = z] log(1 − fEG(G(z), z)) ]

with log fEG ∈ (−∞, 0) and log(1 − fEG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere.
Proof. Proposition 4 (F(Ω \ supp(PGZ)) = 0) and the fact that F(Ω \ supp(PEX)) = 0 imply that R1 := supp(PEX) ∩ supp(PGZ) is the only region of Ω where F may be non-zero; hence F(Ω) = F(R1).

¹We use the definition U ∩ C ≠ ∅ =⇒ μ(U ∩ C) > 0 here.
Note that

supp(PEX) = {(x, E(x)) : x ∈ Ω̂X}
supp(PGZ) = {(G(z), z) : z ∈ Ω̂Z}
=⇒ R1 := supp(PEX) ∩ supp(PGZ) = {(x, z) : E(x) = z ∧ x ∈ Ω̂X ∧ G(z) = x ∧ z ∈ Ω̂Z}.

So a point (x, E(x)) is in R1 if x ∈ Ω̂X, E(x) ∈ Ω̂Z, and G(E(x)) = x. (We can omit the x ∈ Ω̂X condition from inside an expectation over PX, as PX-almost all x ∉ Ω̂X have 0 probability.) Therefore,

DKL(PEX ‖ (PEX + PGZ)/2) − log 2 = F(Ω) = F(R1)
= ∫_{R1} log f(x, z) dPEX
= ∫_Ω 1[(x, z) ∈ R1] log f(x, z) dPEX
= E_{(x,z)∼PEX}[ 1[(x, z) ∈ R1] log f(x, z) ]
= E_{x∼pX}[ 1[(x, E(x)) ∈ R1] log f(x, E(x)) ]
= E_{x∼pX}[ 1[x ∈ Ω̂X ∧ E(x) ∈ Ω̂Z ∧ G(E(x)) = x] log f(x, E(x)) ].

Finally, with Propositions 3 and 5, we have f ∈ (0, 1) PEX-almost everywhere in R1, and therefore log f ∈ (−∞, 0), taking a finite and strictly negative value PEX-almost everywhere. An analogous argument (along with the fact that fEG + fGE = 1) lets us rewrite the other KL divergence term
¦X have 0 probability.) Therefore, Dut (Pex || 72*$"e2) â log2 = F(Q) = F(Râ ) = Sra log f(x, z) dPex = fo lezyers log f(x, 2) dPax = Eq2)~Pex [L[(x.2)eR1] log f(x, 2)] = Exxpx [1Ge2())eR') log f(x, E(x))] = Exxpx [tee retenc(eooy=x] log f(x, E(x))| Finally, with Propositions 3 and 5, we have f â (0, 1) PEX-almost everywhere in R1, and therefore log f â (â â , 0), taking a ï¬ nite and strictly negative value PEX-almost everywhere. An analogous argument (along with the fact that fEG + fGE = 1) lets us rewrite the other KL divergence term
Dux (Paz || 788 $"84 ) â log 2 = Exnpe [feta eirenz(orn))=2] log far(G(z), 2)| = Exxpz [feta eirenz(orn))=2] log (1 â fra(G(2),2))| # DKL The Jensen-Shannon divergence is the mean of these two KL divergences, giving C(E, G): C(E,G) = 2Djs (Pex || Paz) â log 4 = Dxi (Pex || Pext Pon ) + Dkr (Pez || PextPoz ) â log4 = Exxpx [2 pemetenc era )=x] log fra(x, B(x))| + Ex~pz [1,etaetrxass(ate))=3 log (1 â fec(G(z), z))| # APPENDIX B LEARNING DETAILS In this section we provide additional details on the BiGAN learning protocol summarized in Sec- tion 3.4. Goodfellow et al. (2014) found for GAN training that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an â inverseâ objective Î (with the same ï¬ xed point characteristics as V ) provides stronger gradient signal to G and E, where A(D, G, E) = Exnpx [| Ez~pp(-|x) [log (1 â D(x, 2))] ] + Exepz [ Exxpe(-|z) log D(x, z)]]. â
â .â â EE ae log(1â D(x, E(x))) log D(G(z),z) In practice, θG and θE are updated by moving in the positive gradient direction of this inverse objective â θE ,θGÎ , rather than the negative gradient direction of the original objective. We also observed that learning behaved similarly when all parameters θD, θG, θE were updated simultaneously at each iteration rather than alternating between θD updates and θG, θE updates, so we took the simultaneous updating (non-alternating) approach for computational efï¬ ciency. (For standard GAN training, simultaneous updates of θD, θG performed similarly well, so our standard GAN experiments also follow this protocol.)
16 . Published as a conference paper at ICLR 2017 # APPENDIX C MODEL AND TRAINING DETAILS In the following sections we present additional details on the models and training protocols used in the permutation-invariant MNIST and ImageNet evaluations presented in Section 4. Optimization For unsupervised training of BiGANs and baseline methods, we use the Adam optimizer to compute parameter updates, following the hyperparameters (initial step size a = 2 x 10~*, momentum f, = 0.5 and 8 = 0.999) used by [Radford et al 2016). The step size a is decayed exponentially to a = 2 x 10~° starting halfway through training. The mini-batch size is 128. â ¬2 weight decay of 2.5 x 10~° is applied to all multiplicative weights in linear layers (but not to the learned bias { or scale 7 parameters applied after batch normalization). Weights are initialized from a zero-mean normal distribution with a standard deviation of 0.02, with one notable exception: BiGAN discriminator weights that directly multiply z inputs to be added to spatial convolution outputs have initializations scaled by the convolution kernel size â e.g., fora 5 x 5 kernel, weights are initialized with a standard deviation of 0.5, 25 times the standard initialization. Software & hardware We implement BiGANs and baseline feature learning methods using the Theano (Theano Development Team, 2016) framework, based on the convolutional GAN implemen- tation provided by Radford et al. (2016). ImageNet transfer learning experiments (Section 4.3) use the Caffe (Jia et al., 2014) framework, per the Fast R-CNN (Girshick, 2015) and FCN (Long et al., 2015) reference implementations. Most computation is performed on an NVIDIA Titan X or Tesla K40 GPU. C.1 PERMUTATION-INVARIANT MNIST In all permutation-invariant MNIST experiments (Section 4.2), D, G, and E each consist of two hidden layers with 1024 units.
The ï¬ rst hidden layer is followed by a non-linearity; the second is followed by (parameter-free) batch normalization (Ioffe & Szegedy, 2015) and a non-linearity. The second hidden layer in each case is the input to a linear prediction layer of the appropriate size. In D and E, a leaky ReLU (Maas et al., 2013) non-linearity with a â leakâ of 0.2 is used; in G, a standard ReLU non-linearity is used.
All models are trained for 400 epochs. C.2 IMAGENET In all ImageNet experiments (Section 4.3), the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the ï¬ fth and last convolution layer (conv5), with local response normalization (LRN) layers removed and batch normalization (Ioffe & Szegedy, 2015) (including the learned scaling and bias) with leaky ReLU non-linearity applied to the output of each convolution at unsupervised training time. (For supervised evaluation, batch normalization is not used, and the pre-trained scale and bias is merged into the preceding convolutionâ s weights and bias.) In most experiments, both the discriminator D and generator G architecture are those used by Radford et al. (2016), consisting of a series of four 5 à 5 convolutions (or â deconvolutionsâ â fractionally- strided convolutions â for the generator G) applied with 2 pixel stride, each followed by batch normalization and rectiï¬ ed non-linearity. The sole exception is our discriminator baseline feature learning experiment, in which we let the discriminator D be the AlexNet variant described above. Generally, using AlexNet (or similar convnet architecture) as the discriminator D is detrimental to the visual ï¬ delity of the resulting generated images, likely due to the relatively large convolutional ï¬ lter kernel size applied to the input image, as well as the max-pooling layers, which explicitly discard information in the input. However, for fair comparison of the discriminatorâ s feature learning abilities with those of BiGANs, we use the same architecture as used in the BiGAN encoder. Preprocessing To produce a data sample x, we ï¬ rst sample an image from the database, and resize it proportionally such that its shorter edge has a length of 72 pixels. Then, a 64 à 64 crop is randomly selected from the resized image. The crop is ï¬ ipped horizontally with probability 1 2 . Finally, the crop is scaled to [â 1, 1], giving the sample x.