Dataset columns: id (string, 12-15 chars), title (string, 8-162 chars), content (string, 1-17.6k chars), prechunk_id (string, 0-15 chars), postchunk_id (string, 0-15 chars), arxiv_id (string, 10 chars), references (list, length 1).
1611.02779#36
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Trust region policy optimization. CoRR, abs/1502.05477, 2015. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016), 2016. Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5-9, 2003.
1611.02779#35
1611.02779#37
1611.02779
[ "1511.06295" ]
1611.02779#37
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Satinder Pal Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3-4):323-339, 1992. Malcolm Strens. A Bayesian framework for reinforcement learning. In ICML, pp. 943-950, 2000. Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009. William R Thompson.
1611.02779#36
1611.02779#38
1611.02779
[ "1511.06295" ]
1611.02779#38
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933. Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016. Niklas Wahlström, Thomas B Schön, and Marc Peter Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.
1611.02779#37
1611.02779#39
1611.02779
[ "1511.06295" ]
1611.02779#39
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015. Peter Whittle. Optimization over time. John Wiley & Sons, Inc., 1982. Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli.
1611.02779#38
1611.02779#40
1611.02779
[ "1511.06295" ]
1611.02779#40
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the 24th international conference on Machine learning, pp. 1015-1022. ACM, 2007. A Steven Younger, Sepp Hochreiter, and Peter R Conwell. Meta-learning with backpropagation. In Neural Networks, 2001. Proceedings. IJCNN'01. International Joint Conference on, volume 3. IEEE, 2001.

# APPENDIX

# A DETAILED EXPERIMENT SETUP

Common to all experiments: as mentioned in Section 2.2, we use placeholder values when necessary. For example, at t = 0 there is no previous action, reward, or termination flag. Since all of our experiments use discrete actions, we use the embedding of the action 0 as a placeholder for actions, and 0 for both the rewards and termination flags. To form the input to the GRU, we use the values for the rewards and termination flags as-is, and embed the states and actions as described separately below for each experiment. These values are then concatenated together to form the joint embedding. For the neural network architecture, we use rectified linear units throughout the experiments as the hidden activation, and we apply weight normalization without data-dependent initialization (Salimans & Kingma, 2016) to all weight matrices. The hidden-to-hidden weight matrix uses an orthogonal initialization (Saxe et al., 2013), and all other weight matrices use Xavier initialization (Glorot & Bengio, 2010). We initialize all bias vectors to 0. Unless otherwise mentioned, the policy and the baseline use separate neural networks with the same architecture until the final layer, where the number of outputs differs. All experiments are implemented using TensorFlow (Abadi et al., 2016) and rllab (Duan et al., 2016). We use the implementations of classic algorithms provided by the TabulaRL package (Osband, 2016).
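As a rough illustration of how this joint embedding can be assembled at each timestep, the sketch below concatenates an embedded state, a one-hot action embedding, and the raw reward and termination flag. It is plain NumPy pseudocode reflecting our reading of the setup; the function name and shapes are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def rl2_gru_input(state_emb, prev_action, prev_reward, prev_done, num_actions):
    """Build the per-timestep GRU input: embedded state, one-hot embedded
    previous action, and the raw reward and termination flag, concatenated.
    At t = 0 the caller passes action 0, reward 0.0, and done 0.0 as the
    placeholder values described above."""
    action_emb = np.zeros(num_actions)
    action_emb[prev_action] = 1.0                        # one-hot action embedding
    extras = np.array([prev_reward, float(prev_done)])   # reward and termination flag as-is
    return np.concatenate([state_emb, action_emb, extras])

# Example: a stateless bandit (constant 0 "state" embedding), 5 arms, first timestep.
x0 = rl2_gru_input(np.zeros(1), prev_action=0, prev_reward=0.0, prev_done=0.0, num_actions=5)
```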
1611.02779#39
1611.02779#41
1611.02779
[ "1511.06295" ]
1611.02779#41
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
A.1 MULTI-ARMED BANDITS

The parameters for TRPO are shown in Table 1. Since the environment is stateless, we use a constant embedding 0 as a placeholder in place of the states, and a one-hot embedding for the actions.

Table 1: Hyperparameters for TRPO: multi-armed bandits

| Hyperparameter | Value |
| --- | --- |
| Discount | 0.99 |
| GAE λ | 0.3 |
| Policy Iters | Up to 1000 |
| # GRU Units | 256 |
| Mean KL | 0.01 |
| Batch size | 250000 |

A.2 TABULAR MDPs

The parameters for TRPO are shown in Table 2.
1611.02779#40
1611.02779#42
1611.02779
[ "1511.06295" ]
1611.02779#42
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
We use a one-hot embedding for the states and actions separately, which are then concatenated together.

Table 2: Hyperparameters for TRPO: tabular MDPs

| Hyperparameter | Value |
| --- | --- |
| Discount | 0.99 |
| GAE λ | 0.3 |
| Policy Iters | Up to 10000 |
| # GRU Units | 256 |
| Mean KL | 0.01 |
| Batch size | 250000 |

A.3 VISUAL NAVIGATION

The parameters for TRPO are shown in Table 3. For this task, we use a neural network to form the joint embedding. We rescale the images to have width 40 and height 30 with RGB channels preserved, and we recenter the RGB values to lie within range [-1, 1]. Then, this preprocessed image is passed through 2 convolution layers, each with 16 filters of size 5 × 5 and stride 2. The action is first embedded into a 256-dimensional vector where the embedding is learned, and then concatenated with the flattened output of the final convolution layer. The joint vector is then fed to a fully connected layer with 256 hidden units. Unlike previous experiments, we let the policy and the baseline share the same neural network. We found this to improve the stability of training baselines and also the end performance of the policy, possibly due to regularization effects and better learned features imposed by weight sharing. Similar weight-sharing techniques have also been explored in (Mnih et al., 2016).
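The sketch below renders this joint embedding in PyTorch (the original implementation used TensorFlow and rllab); the class name, the use of unpadded convolutions, and the example dimensions are illustrative assumptions rather than details confirmed by the paper.

```python
import torch
import torch.nn as nn

class VisualNavEmbedding(nn.Module):
    """Joint embedding for the visual navigation task: a (3, 30, 40) image in
    [-1, 1] passes through two 16-filter 5x5 stride-2 convolutions, the discrete
    action through a learned 256-d embedding, and their concatenation through a
    256-unit fully connected layer."""
    def __init__(self, num_actions, action_dim=256, out_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, stride=2), nn.ReLU(),
        )
        with torch.no_grad():                      # infer the flattened conv output size
            n_flat = self.conv(torch.zeros(1, 3, 30, 40)).numel()
        self.action_emb = nn.Embedding(num_actions, action_dim)
        self.fc = nn.Linear(n_flat + action_dim, out_dim)

    def forward(self, image, action):
        h = self.conv(image).flatten(start_dim=1)  # (B, n_flat)
        a = self.action_emb(action)                # (B, 256)
        return torch.relu(self.fc(torch.cat([h, a], dim=1)))

emb = VisualNavEmbedding(num_actions=4)
out = emb(torch.zeros(2, 3, 30, 40), torch.tensor([0, 3]))  # -> shape (2, 256)
```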
1611.02779#41
1611.02779#43
1611.02779
[ "1511.06295" ]
1611.02779#43
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Table 3: Hyperparameters for TRPO: visual navigation

| Hyperparameter | Value |
| --- | --- |
| Discount | 0.99 |
| GAE λ | 0.99 |
| Policy Iters | Up to 5000 |
| # GRU Units | 256 |
| Mean KL | 0.01 |
| Batch size | 50000 |

# REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks.
1611.02779#42
1611.02779#44
1611.02779
[ "1511.06295" ]
1611.02779#44
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
In AISTATS, volume 9, pp. 249-256, 2010. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016. Ian Osband. TabulaRL. https://github.com/iosband/TabulaRL, 2016. Tim Salimans and Diederik P Kingma.
1611.02779#43
1611.02779#45
1611.02779
[ "1511.06295" ]
1611.02779#45
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016. Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
1611.02779#44
1611.02779
[ "1511.06295" ]
1611.02163#0
Unrolled Generative Adversarial Networks
arXiv:1611.02163v4 [cs.LG] 12 May 2017. Published as a conference paper at ICLR 2017

# UNROLLED GENERATIVE ADVERSARIAL NETWORKS

Luke Metz* Google Brain [email protected] Ben Poole† Stanford University [email protected] David Pfau Google DeepMind [email protected] Jascha Sohl-Dickstein Google Brain [email protected]

# ABSTRACT

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
1611.02163#1
1611.02163
[ "1511.06350" ]
1611.02163#1
Unrolled Generative Adversarial Networks
# 1 INTRODUCTION

The use of deep neural networks as generative models for complex data has made great advances in recent years. This success has been achieved through a surprising diversity of training losses and model architectures, including denoising autoencoders (Vincent et al., 2010), variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014; Gregor et al., 2015; Kulkarni et al., 2015; Burda et al., 2015; Kingma et al., 2016), generative stochastic networks (Alain et al., 2015), diffusion probabilistic models (Sohl-Dickstein et al., 2015), autoregressive models (Theis & Bethge, 2015; van den Oord et al., 2016a;b), real non-volume preserving transformations (Dinh et al., 2014; 2016), Helmholtz machines (Dayan et al., 1995; Bornschein et al., 2015), and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014).

1.1 GENERATIVE ADVERSARIAL NETWORKS

While most deep generative models are trained by maximizing log likelihood or a lower bound on log likelihood, GANs take a radically different approach that does not require inference or explicit calculation of the data likelihood. Instead, two models are used to solve a minimax game: a generator which samples data, and a discriminator which classifies the data as real or generated. In theory these models are capable of modeling an arbitrarily complex probability distribution. When using the optimal discriminator for a given class of generators, the original GAN proposed by Goodfellow et al. minimizes the Jensen-Shannon divergence between the data distribution and the generator, and extensions generalize this to a wider class of divergences (Nowozin et al., 2016; Sønderby et al., 2016; Poole et al., 2016). The ability to train extremely flexible
1611.02163#0
1611.02163#2
1611.02163
[ "1511.06350" ]
1611.02163#2
Unrolled Generative Adversarial Networks
generating functions, without explicitly computing likelihoods or performing inference, and while targeting more mode-seeking divergences, has made GANs extremely successful in image generation (Odena et al., 2016; Salimans et al., 2016; Radford et al., 2015), and image super resolution (Ledig et al., 2016). The flexibility of the GAN framework has also enabled a number of successful extensions of the technique, for instance for structured prediction (Reed et al., 2016a;b; Odena et al., 2016), training energy based models (Zhao et al., 2016), and combining the GAN loss with a mutual information loss (Chen et al., 2016). *Work done as a member of the Google Brain Residency program (g.co/brainresidency) †Work completed as part of a Google Brain internship
1611.02163#1
1611.02163#3
1611.02163
[ "1511.06350" ]
1611.02163#3
Unrolled Generative Adversarial Networks
In practice, however, GANs suffer from many issues, particularly during training. One common failure mode involves the generator collapsing to produce only a single sample or a small family of very similar samples. Another involves the generator and discriminator oscillating during training, rather than converging to a fixed point. In addition, if one agent becomes much more powerful than the other, the learning signal to the other agent becomes useless, and the system does not learn. To train GANs many tricks must be employed, such as careful selection of architectures (Radford et al., 2015), minibatch discrimination (Salimans et al., 2016), and noise injection (Salimans et al., 2016; Sønderby et al., 2016). Even with these tricks the set of hyperparameters for which training is successful is generally very small in practice. Once converged, the generative models produced by the GAN training procedure normally do not cover the whole distribution (Dumoulin et al., 2016; Che et al., 2016), even when targeting a mode-covering divergence such as KL. Additionally, because it is intractable to compute the GAN training loss, and because approximate measures of performance such as Parzen window estimates suffer from major flaws (Theis et al., 2016), evaluation of GAN performance is challenging. Currently, human judgement of sample quality is one of the leading metrics for evaluating GANs. In practice this metric does not take into account mode dropping if the number of modes is greater than the number of samples one is visualizing. In fact, the mode dropping problem generally helps visual sample quality as the model can choose to focus on only the most common modes. These common modes correspond, by definition, to more typical samples. Additionally, the generative model is able to allocate more expressive power to the modes it does cover than it would if it attempted to cover all modes.

1.2 DIFFERENTIATING THROUGH OPTIMIZATION

Many optimization schemes, including SGD, RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), consist of a sequence of differentiable updates to parameters.
1611.02163#2
1611.02163#4
1611.02163
[ "1511.06350" ]
1611.02163#4
Unrolled Generative Adversarial Networks
Gradients can be backpropagated through unrolled optimization updates in a similar fashion to backpropagation through a recurrent neural network. The parameters output by the optimizer can thus be included, in a differentiable way, in another objective (Maclaurin et al., 2015). This idea was first suggested for minimax problems in (Pearlmutter & Siskind, 2008), while (Zhang & Lesser, 2010) provided a theoretical analysis and experimental results on differentiating through a single step of gradient ascent for simple matrix games. Differentiating through unrolled optimization was first scaled to deep networks in (Maclaurin et al., 2015), where it was used for hyperparameter optimization. More recently, (Belanger & McCallum, 2015; Han et al., 2016; Andrychowicz et al., 2016) backpropagate through optimization procedures in contexts unrelated to GANs or minimax games. In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.
1611.02163#3
1611.02163#5
1611.02163
[ "1511.06350" ]
1611.02163#5
Unrolled Generative Adversarial Networks
2 METHOD

2.1 GENERATIVE ADVERSARIAL NETWORKS

The GAN learning problem is to find the optimal parameters θ*_G for a generator function G(z; θ_G) in a minimax objective,

$$\theta_G^* = \operatorname*{argmin}_{\theta_G} \max_{\theta_D} f(\theta_G, \theta_D) \qquad (1)$$

$$= \operatorname*{argmin}_{\theta_G} f\left(\theta_G, \theta_D^*(\theta_G)\right) \qquad (2)$$

$$\theta_D^*(\theta_G) = \operatorname*{argmax}_{\theta_D} f(\theta_G, \theta_D), \qquad (3)$$

where f is commonly chosen to be

$$f(\theta_G, \theta_D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log\left(D(x; \theta_D)\right)\right] + \mathbb{E}_{z \sim \mathcal{N}(0, I)}\left[\log\left(1 - D\left(G(z; \theta_G); \theta_D\right)\right)\right]. \qquad (4)$$

Here x ∈ X is the data variable, z ∈ Z is the latent variable, p_data is the data distribution, the discriminator D(·; θ_D): X → [0, 1] outputs the estimated probability that a sample x comes from the data distribution, θ_D and θ_G are the discriminator and generator parameters, and the generator function G(·; θ_G): Z → X transforms a sample in the latent space into a sample in the data space.
1611.02163#4
1611.02163#6
1611.02163
[ "1511.06350" ]
1611.02163#6
Unrolled Generative Adversarial Networks
For the minimax loss in Eq. 4, the optimal discriminator D*(x) is a known smooth function of the generator probability p_G(x) (Goodfellow et al., 2014),

$$D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_G(x)}. \qquad (5)$$

When the generator loss in Eq. 2 is rewritten directly in terms of p_G(x) and Eq. 5 rather than θ_G and θ*_D(θ_G), then it is similarly a smooth function of p_G(x). These smoothness guarantees are typically lost when D(x; θ_D) and G(z; θ_G) are drawn from parametric families. They nonetheless suggest that the true generator objective in Eq. 2 will often be well behaved, and is a desirable target for direct optimization. Explicitly solving for the optimal discriminator parameters θ*_D(θ_G) for every update step of the generator G is computationally infeasible for discriminators based on neural networks. Therefore this minimax optimization problem is typically solved by alternating gradient descent on θ_G and ascent on θ_D. The optimal solution θ* = {θ*_G, θ*_D} is a fixed point of these iterative learning dynamics. Additionally, if f(θ_G, θ_D) is convex in θ_G and concave in θ_D, then alternating gradient descent (ascent) trust region updates are guaranteed to converge to the fixed point, under certain additional weak assumptions (Juditsky et al., 2011). However in practice f(θ_G, θ_D) is typically very far from convex in θ_G and concave in θ_D, and updates are not constrained in an appropriate way. As a result GAN training suffers from mode collapse, undamped oscillations, and other problems detailed in Section 1.1. In order to address these difficulties, we will introduce a surrogate objective function f_K(θ_G, θ_D) for training the generator which more closely resembles the true generator objective f(θ_G, θ*_D(θ_G)).
1611.02163#5
1611.02163#7
1611.02163
[ "1511.06350" ]
1611.02163#7
Unrolled Generative Adversarial Networks
2.2 UNROLLING GANS

A local optimum of the discriminator parameters θ*_D can be expressed as the fixed point of an iterative optimization procedure,

$$\theta_D^0 = \theta_D \qquad (6)$$

$$\theta_D^{k+1} = \theta_D^k + \eta^k \frac{df(\theta_G, \theta_D^k)}{d\theta_D^k} \qquad (7)$$

$$\theta_D^*(\theta_G) = \lim_{k \to \infty} \theta_D^k, \qquad (8)$$
1611.02163#6
1611.02163#8
1611.02163
[ "1511.06350" ]
1611.02163#8
Unrolled Generative Adversarial Networks
where η^k is the learning rate schedule. For clarity, we have expressed Eq. 7 as a full batch steepest gradient ascent equation. More sophisticated optimizers can be similarly unrolled. In our experiments we unroll Adam (Kingma & Ba, 2014). By unrolling for K steps, we create a surrogate objective for the update of the generator,

$$f_K(\theta_G, \theta_D) = f\left(\theta_G, \theta_D^K(\theta_G, \theta_D)\right). \qquad (9)$$

When K = 0 this objective corresponds exactly to the standard GAN objective, while as K → ∞
1611.02163#7
1611.02163#9
1611.02163
[ "1511.06350" ]
1611.02163#9
Unrolled Generative Adversarial Networks
it corresponds to the true generator objective function f(θ_G, θ*_D(θ_G)). By adjusting the number of unrolling steps K, we are thus able to interpolate between standard GAN training dynamics with their associated pathologies, and more costly gradient descent on the true generator loss.

2.3 PARAMETER UPDATES

The generator and discriminator parameter updates using this surrogate loss are

$$\theta_G \leftarrow \theta_G - \eta \frac{df_K(\theta_G, \theta_D)}{d\theta_G} \qquad (10)$$

$$\theta_D \leftarrow \theta_D + \eta \frac{df(\theta_G, \theta_D)}{d\theta_D}. \qquad (11)$$
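To make Eqs. 6-11 concrete, the sketch below unrolls K differentiable gradient-ascent steps on a deliberately tiny one-parameter generator and discriminator, then backpropagates the surrogate f_K through those steps with PyTorch. The scalar parameters, toy data, and step sizes are illustrative assumptions; the paper's experiments instead unroll minibatch Adam over full networks.

```python
import torch

def f(theta_g, theta_d, x_real, z):
    # Standard GAN value f(theta_G, theta_D) (Eq. 4) for toy scalar parameters:
    # G(z) = theta_g * z and D(x) = sigmoid(theta_d * x).
    d_real = torch.sigmoid(theta_d * x_real)
    d_fake = torch.sigmoid(theta_d * (theta_g * z))
    return torch.log(d_real).mean() + torch.log(1.0 - d_fake).mean()

def surrogate_fK(theta_g, theta_d, x_real, z, K, eta=0.1):
    # f_K(theta_G, theta_D) = f(theta_G, theta_D^K(theta_G, theta_D)) (Eq. 9).
    # create_graph=True keeps the unrolled updates differentiable, so the gradient
    # with respect to theta_g retains the second term of Eq. 12.
    theta_dk = theta_d
    for _ in range(K):
        val = f(theta_g, theta_dk, x_real, z)
        grad, = torch.autograd.grad(val, theta_dk, create_graph=True)
        theta_dk = theta_dk + eta * grad                 # Eq. 7: gradient ascent on D
    return f(theta_g, theta_dk, x_real, z)

# One round of the alternating updates in Eqs. 10-11 (full-batch steepest steps).
theta_g = torch.tensor(0.5, requires_grad=True)
theta_d = torch.tensor(0.1, requires_grad=True)
x_real = torch.randn(512) + 2.0                          # toy "data" samples
z = torch.randn(512)                                     # latent samples
eta = 0.1

fK = surrogate_fK(theta_g, theta_d, x_real, z, K=5)
g_grad, = torch.autograd.grad(fK, theta_g)               # backprop through the K unrolled steps
theta_g = (theta_g - eta * g_grad).detach().requires_grad_()   # Eq. 10

f0 = f(theta_g, theta_d, x_real, z)
d_grad, = torch.autograd.grad(f0, theta_d)               # the discriminator itself is not unrolled
theta_d = (theta_d + eta * d_grad).detach().requires_grad_()   # Eq. 11
```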
1611.02163#8
1611.02163#10
1611.02163
[ "1511.06350" ]
1611.02163#10
Unrolled Generative Adversarial Networks
For clarity we use full batch steepest gradient descent (ascent) with stepsize η above, while in experiments we instead use minibatch Adam for both updates. The gradient in Eq. 10 requires backpropagating through the optimization process in Eq. 7. A clear description of differentiation through gradient descent is given as Algorithm 2 in (Maclaurin et al., 2015), though in practice the use of an automatic differentiation package means this step does not need to be programmed explicitly. A pictorial representation of these updates is provided in Figure 1.

[Figure 1 diagram: forward pass and gradient arrows through three unrolled SGD steps on the discriminator]

Figure 1: An illustration of the computation graph for an unrolled GAN with 3 unrolling steps. The generator update in Equation 10 involves backpropagating the generator gradient (blue arrows) through the unrolled optimization. Each step k in the unrolled optimization uses the gradients of f_k with respect to θ^k_D, as described in Equation 7 and indicated by the green arrows. The discriminator update in Equation 11 does not depend on the unrolled optimization (red arrow).

It is important to distinguish this from an approach suggested in (Goodfellow et al., 2014), that several update steps of the discriminator parameters should be run before each single update step for the generator. In that approach, the update steps for both models are still gradient descent (ascent) with respect to fixed values of the other model parameters, rather than the surrogate loss we describe in Eq. 9. Performing K steps of discriminator update between each single step of generator update corresponds to updating the generator parameters θ_G using only the first term in Eq. 12 below.

2.4 THE MISSING GRADIENT TERM

To better understand the behavior of the surrogate loss f_K(θ_G, θ_D), we examine its gradient with respect to the generator parameters θ_G,
1611.02163#9
1611.02163#11
1611.02163
[ "1511.06350" ]
1611.02163#11
Unrolled Generative Adversarial Networks
$$\frac{df_K(\theta_G, \theta_D)}{d\theta_G} = \frac{\partial f\left(\theta_G, \theta_D^K(\theta_G, \theta_D)\right)}{\partial \theta_G} + \frac{\partial f\left(\theta_G, \theta_D^K(\theta_G, \theta_D)\right)}{\partial \theta_D^K(\theta_G, \theta_D)} \frac{d\theta_D^K(\theta_G, \theta_D)}{d\theta_G} \qquad (12)$$

Standard GAN training corresponds exactly to updating the generator parameters using only the first term in this gradient, with θ^K_D(θ_G, θ_D) being the parameters resulting from the discriminator update step. An optimal generator for any fixed discriminator is a delta function at the x to which the discriminator assigns highest data probability. Therefore, in standard GAN training, each generator update step is a partial collapse towards a delta function. The second term captures how the discriminator would react to a change in the generator. It reduces the tendency of the generator to engage in mode collapse. For instance, the second term reflects that as the generator collapses towards a delta function, the discriminator reacts and assigns lower probability to that state, increasing the generator loss. It therefore discourages the generator from collapsing, and may improve stability.
1611.02163#10
1611.02163#12
1611.02163
[ "1511.06350" ]
1611.02163#12
Unrolled Generative Adversarial Networks
As K → ∞, θ^K_D approaches a local optimum of f, where ∂f/∂θ^K_D = 0, and therefore the second term in Eq. 12 goes to 0 (Danskin, 1967). The gradient of the unrolled surrogate loss f_K(θ_G, θ_D) with respect to θ_G is thus identical to the gradient of the standard GAN loss f(θ_G, θ_D) both when K = 0 and when K → ∞, where we take K → ∞
1611.02163#11
1611.02163#13
1611.02163
[ "1511.06350" ]
1611.02163#13
Unrolled Generative Adversarial Networks
to imply that in the standard GAN the discriminator is also fully optimized between each generator update. Between these two extremes, f_K(θ_G, θ_D) captures additional information about the response of the discriminator to changes in the generator.

2.5 CONSEQUENCES OF THE SURROGATE LOSS

GANs can be thought of as a game between the discriminator (D) and the generator (G). The agents take turns taking actions and updating their parameters until a Nash equilibrium is reached. The optimal action for D is to evaluate the probability ratio p_data(x) / (p_G(x) + p_data(x)) for the generator's move x (Eq. 5). The optimal generator action is to move its mass to maximize this ratio. The initial move for G will be to move as much mass as its parametric family and update step permits to the single point that maximizes the ratio of probability densities. The action D will then take is quite simple. It will track that point, and to the extent allowed by its own parametric family and update step assign low data probability to it, and uniform probability everywhere else. This cycle of G moving and D following will repeat forever or converge depending on the rate of change of the two agents. This is similar to the situation in simple matrix games like rock-paper-scissors and matching pennies, where alternating gradient descent (ascent) with a fixed learning rate is known not to converge (Singh et al., 2000; Bowling & Veloso, 2002). In the unrolled case, however, this undesirable behavior no longer occurs.
1611.02163#12
1611.02163#14
1611.02163
[ "1511.06350" ]
1611.02163#14
Unrolled Generative Adversarial Networks
Now G's actions take into account how D will respond. In particular, G will try to make steps that D will have a hard time responding to. This extra information helps the generator spread its mass to make the next D step less effective instead of collapsing to a point. In principle, a surrogate loss function could be used for both D and G. In the case of 1-step unrolled optimization this is known to lead to convergence for games in which gradient descent (ascent) fails (Zhang & Lesser, 2010). However, the motivation for using the surrogate generator loss in Section 2.2, of unrolling the inner of two nested min and max functions, does not apply to using a surrogate discriminator loss. Additionally, it is more common for the discriminator to overpower the generator than vice-versa when training a GAN.
1611.02163#13
1611.02163#15
1611.02163
[ "1511.06350" ]
1611.02163#15
Unrolled Generative Adversarial Networks
Giving more information to G by allowing it to "see into the future" may thus help the two models be more balanced.

# 3 EXPERIMENTS

In this section we demonstrate improved mode coverage and stability by applying this technique to five datasets of increasing complexity. Evaluation of generative models is a notoriously hard problem (Theis et al., 2016). As such the de facto standard in GAN literature has become sample quality as evaluated by a human and/or evaluated by a heuristic (Inception score for example, (Salimans et al., 2016)). While these evaluation metrics do a reasonable job capturing sample quality, they fail to capture sample diversity.
1611.02163#14
1611.02163#16
1611.02163
[ "1511.06350" ]
1611.02163#16
Unrolled Generative Adversarial Networks
In our first 2 experiments diversity is easily evaluated via visual inspection. In our later experiments this is not the case, and we will use a variety of methods to quantify coverage of samples. Our measures are individually strongly suggestive of unrolling reducing mode-collapse and improving stability, but none of them alone are conclusive. We believe that taken together however, they provide extremely compelling evidence for the advantages of unrolling. When doing stochastic optimization, we must choose which minibatches to use in the unrolling updates in Eq. 7. We experimented with both a fixed minibatch and re-sampled minibatches for each unrolling step, and found it did not significantly impact the result. We use fixed minibatches for all experiments in this section. We provide a reference implementation of this technique at github.com/poolio/unrolled_gan.

3.1 MIXTURE OF GAUSSIANS DATASET

To illustrate the impact of discriminator unrolling, we train a simple GAN architecture on a 2D mixture of 8 Gaussians arranged in a circle. For a detailed list of architecture and hyperparameters see Appendix A. Figure 2 shows the dynamics of this model through time.
1611.02163#15
1611.02163#17
1611.02163
[ "1511.06350" ]
1611.02163#17
Unrolled Generative Adversarial Networks
Without unrolling the generator rotates around the valid modes of the data distribution but is never able to spread out mass. When adding in unrolling steps G quickly learns to spread probability mass and the system converges to the data distribution. In Appendix B we perform further experiments on this toy dataset. We explore how unrolling compares to historical averaging, and compares to using the unrolled discriminator to update the

[Figure 2 panels: Step 0, Step 5k, Step 10k, Step 15k, Step 20k, Step 25k, Target]

Figure 2:
1611.02163#16
1611.02163#18
1611.02163
[ "1511.06350" ]
1611.02163#18
Unrolled Generative Adversarial Networks
Unrolling the discriminator stabilizes GAN training on a toy 2D mixture of Gaussians dataset. Columns show a heatmap of the generator distribution after increasing numbers of training steps. The final column shows the data distribution. The top row shows training for a GAN with 10 unrolling steps. Its generator quickly spreads out and converges to the target distribution. The bottom row shows standard GAN training. The generator rotates through the modes of the data distribution. It never converges to a fixed distribution, and only ever assigns significant probability mass to a single data mode at once.
1611.02163#17
1611.02163#19
1611.02163
[ "1511.06350" ]
1611.02163#19
Unrolled Generative Adversarial Networks
[Figure 3 image: MNIST samples from the two models at 20k, 50k, and 100k training steps]

Figure 3:
1611.02163#18
1611.02163#20
1611.02163
[ "1511.06350" ]
1611.02163#20
Unrolled Generative Adversarial Networks
Unrolled GAN training increases stability for an RNN generator and convolutional discriminator trained on MNIST. The top row was run with 20 unrolling steps. The bottom row is a standard GAN, with 0 unrolling steps. Images are samples from the generator after the indicated number of training steps.

generator, but without backpropagating through the generator. In both cases we find that the unrolled objective performs better.

3.2 PATHOLOGICAL MODEL WITH MISMATCHED GENERATOR AND DISCRIMINATOR

To evaluate the ability of this approach to improve trainability, we look to a traditionally challenging family of models to train: recurrent neural networks (RNNs). In this experiment we try to generate MNIST samples using an LSTM (Hochreiter & Schmidhuber, 1997). MNIST digits are 28x28 pixel images. At each timestep of the generator LSTM, it outputs one column of this image, so that after 28 timesteps it has output the entire sample. We use a convolutional neural network as the discriminator. See Appendix C for the full model and training details. Unlike in all previously successful GAN models, there is no symmetry between the generator and the discriminator in this task, resulting in a more complex power balance. Results can be seen in Figure 3. Once again, without unrolling the model quickly collapses, and rotates through a sequence of single modes. Instead of rotating spatially, it cycles through proto-digit like blobs. When running with unrolling steps the generator disperses and appears to cover the whole data distribution, as in the 2D example.
1611.02163#19
1611.02163#21
1611.02163
[ "1511.06350" ]
1611.02163#21
Unrolled Generative Adversarial Networks
| Unrolling steps | 0 | 1 | 5 |
| --- | --- | --- | --- |
| 1/4 size of D compared to G: Modes generated | 30.6 ± 20.73 | 65.4 ± 34.75 | 236.4 ± 63.30 |
| 1/4 size of D compared to G: KL(model || data) | 5.99 ± 0.42 | 5.911 ± 0.14 | 4.67 ± 0.43 |
| 1/2 size of D compared to G: Modes generated | 628.0 ± 140.9 | 523.6 ± 55.768 | 732.0 ± 44.98 |
| 1/2 size of D compared to G: KL(model || data) | 2.58 ± 0.751 | 2.44 ± 0.26 | 1.66 ± 0.090 |

Table 1: Unrolled GANs cover more discrete modes when modeling a dataset with 1,000 data modes, corresponding to all combinations of three MNIST digits (10^3 digit combinations). The number of modes covered is given for different numbers of unrolling steps, and for two different architectures. The reverse KL divergence between model and data is also given. Standard error is provided for both measures.

3.3 MODE AND MANIFOLD COLLAPSE USING AUGMENTED MNIST

GANs suffer from two different types of model collapse: collapse to a subset of data modes, and collapse to a sub-manifold within the data distribution. In these experiments we isolate both effects using artificially constructed datasets, and demonstrate that unrolling can largely rescue both types of collapse.

3.3.1 DISCRETE MODE COLLAPSE

To explore the degree to which GANs drop discrete modes in a dataset, we use a technique similar to one from (Che et al., 2016). We construct a dataset by stacking three randomly chosen MNIST digits, so as to construct an RGB image with a different MNIST digit in each color channel. This new dataset has 1,000 distinct modes, corresponding to each combination of the ten MNIST classes in the three channels. We train a GAN on this dataset, and generate samples from the trained model (25,600 samples for all experiments). We then compute the predicted class label of each color channel using a pre-trained MNIST classifier.
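A minimal sketch of this stacked-MNIST construction; the function name and the label-triple encoding are illustrative assumptions consistent with the description above.

```python
import numpy as np

def make_stacked_mnist(images, labels, n_samples, seed=0):
    """Stack three randomly chosen MNIST digits into the R, G, and B channels of
    one image, giving 10^3 = 1,000 discrete modes (one per label triple).
    `images` is (N, 28, 28) and `labels` is (N,)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(images), size=(n_samples, 3))
    stacked = np.stack([images[idx[:, c]] for c in range(3)], axis=-1)  # (n, 28, 28, 3)
    mode_id = labels[idx[:, 0]] * 100 + labels[idx[:, 1]] * 10 + labels[idx[:, 2]]
    return stacked, mode_id
```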
1611.02163#20
1611.02163#22
1611.02163
[ "1511.06350" ]
1611.02163#22
Unrolled Generative Adversarial Networks
To evaluate performance, we use two metrics: the number of modes for which the generator produced at least one sample, and the KL divergence between the model and the expected data distribution. Within this discrete label space, a KL divergence can be estimated tractably between the generated samples and the data distribution over classes, where the data distribution is a uniform distribution over all 1,000 classes. As presented in Table 1, as the number of unrolling steps is increased, both mode coverage and reverse KL divergence improve. Contrary to (Che et al., 2016), we found that reasonably sized models (such as the one used in Section 3.4) covered all 1,000 modes even without unrolling. As such we use smaller convolutional GAN models. Details on the models used are provided in Appendix E. We observe an additional interesting effect in this experiment.
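Both metrics can be computed directly from the classifier's predicted label triples, as in the sketch below (a plain NumPy illustration of the evaluation described above; the function name is ours).

```python
import numpy as np

def mode_coverage_and_kl(predicted_modes, n_modes=1000):
    """Count how many of the 1,000 modes received at least one generated sample,
    and estimate KL(model || data), where the data distribution is uniform over
    all modes. `predicted_modes` holds one integer mode id (0..999) per sample."""
    counts = np.bincount(predicted_modes, minlength=n_modes)
    p_model = counts / counts.sum()
    covered = int((counts > 0).sum())
    nonzero = p_model > 0                 # 0 * log(0) terms contribute nothing
    kl = np.sum(p_model[nonzero] * np.log(p_model[nonzero] * n_modes))
    return covered, kl
```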
1611.02163#21
1611.02163#23
1611.02163
[ "1511.06350" ]
1611.02163#23
Unrolled Generative Adversarial Networks
The benefits of unrolling increase as the discriminator size is reduced. We believe unrolling effectively increases the capacity of the discriminator. The unrolled discriminator can better react to any specific way in which the generator is producing non-data-like samples. When the discriminator is weak, the positive impact of unrolling is thus larger.

# 3.3.2 MANIFOLD COLLAPSE

In addition to discrete modes, we examine the effect of unrolling when modeling continuous manifolds. To get at this quantity, we constructed a dataset consisting of colored MNIST digits. Unlike in the previous experiment, a single MNIST digit was chosen, and then assigned a single monochromatic color. With a perfect generator, one should be able to recover the distribution of colors used to generate the digits. We use colored MNIST digits so that the generator also has to model the digits, which makes the task sufficiently complex that the generator is unable to perfectly solve it. The color of each digit is sampled from a 3D normal distribution. Details of this dataset are provided in Appendix F. We will examine the distribution of colors in the samples generated by the trained GAN. As will also be true in the CIFAR10 example in Section 3.4, the lack of diversity in generated colors is almost invisible using only visual inspection of the samples. Samples can be found in Appendix F.
1611.02163#22
1611.02163#24
1611.02163
[ "1511.06350" ]
1611.02163#24
Unrolled Generative Adversarial Networks
| Unrolling steps | JS divergence, 1/4 layer size | JS divergence, 1/2 layer size | JS divergence, 1/1 layer size |
| --- | --- | --- | --- |
| 0 | 0.073 ± 0.0058 | 0.095 ± 0.011 | 0.034 ± 0.0034 |
| 1 | 0.142 ± 0.028 | 0.119 ± 0.010 | 0.050 ± 0.0026 |
| 5 | 0.049 ± 0.0021 | 0.055 ± 0.0049 | 0.027 ± 0.0028 |
| 10 | 0.075 ± 0.012 | 0.074 ± 0.016 | 0.025 ± 0.00076 |

Table 2: Unrolled GANs better model a continuous distribution. GANs are trained to model randomly colored MNIST digits, where the color is drawn from a Gaussian distribution. The JS divergence between the data and model distributions over digit colors is then reported, along with standard error in the JS divergence. More unrolling steps, and larger models, lead to better JS divergence.

Figure 4: Visual perception of sample quality and diversity is very similar for models trained with different numbers of unrolling steps. Actual sample diversity is higher with more unrolling steps. Each pane shows samples generated after training a model on CIFAR10 with 0, 1, 5, and 10 steps of unrolling.

In order to recover the color the GAN assigned to the digit, we used k-means with 2 clusters, to pick out the foreground color from the background. We then performed this transformation for both the training data and the generated images.
1611.02163#23
1611.02163#25
1611.02163
[ "1511.06350" ]
1611.02163#25
Unrolled Generative Adversarial Networks
Next we fit a Gaussian kernel density estimator to both distributions over digit colors. Finally, we computed the JS divergence between the model and data distributions over colors. Results can be found in Table 2 for several model sizes. Details of the models are provided in Appendix F. In general, the best performing models are unrolled for 5-10 steps, and larger models perform better than smaller models. Counter-intuitively, taking 1 unrolling step seems to hurt this measure of diversity. We suspect that this is due to it introducing oscillatory dynamics into training. Taking more unrolling steps however leads to improved performance with unrolling.
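One way to carry out this evaluation is sketched below, using SciPy's Gaussian KDE and a Monte Carlo estimate of the JS divergence. This is our own illustrative estimator under the stated assumptions, not necessarily the exact procedure used for Table 2.

```python
import numpy as np
from scipy.stats import gaussian_kde

def color_js_divergence(colors_data, colors_model, n_mc=5000):
    """Estimate the JS divergence between the data and model distributions over
    digit colors. Inputs are (N, 3) arrays of recovered foreground colors."""
    p = gaussian_kde(colors_data.T)       # scipy expects shape (dims, N)
    q = gaussian_kde(colors_model.T)
    mix = lambda x: 0.5 * (p(x) + q(x))

    def kl_to_mix(kde):
        x = kde.resample(n_mc)            # Monte Carlo samples from one component
        return np.mean(np.log(kde(x) + 1e-12) - np.log(mix(x) + 1e-12))

    return 0.5 * kl_to_mix(p) + 0.5 * kl_to_mix(q)
```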
1611.02163#24
1611.02163#26
1611.02163
[ "1511.06350" ]
1611.02163#26
Unrolled Generative Adversarial Networks
3.4 IMAGE MODELING OF CIFAR10

Here we test our technique on a more traditional convolutional GAN architecture and task, similar to those used in (Radford et al., 2015; Salimans et al., 2016). In the previous experiments we tested models where the standard GAN training algorithm would not converge. In this section we improve a standard model by reducing its tendency to engage in mode collapse. We ran 4 configurations of this model, varying the number of unrolling steps to be 0, 1, 5, or 10. Each configuration was run 5 times with different random seeds. For full training details see Appendix D. Samples from each of the 4 configurations can be found in Figure 4. There is no obvious difference in visual quality across these model configurations. Visual inspection however provides only a poor measure of sample diversity. By training with an unrolled discriminator, we expect to generate more diverse samples which more closely resemble the underlying data distribution. We introduce two techniques to examine sample diversity: inference via optimization, and pairwise distance distributions.
1611.02163#25
1611.02163#27
1611.02163
[ "1511.06350" ]
1611.02163#27
Unrolled Generative Adversarial Networks
| Unrolling Steps | Average MSE | Percent Best Rank |
| --- | --- | --- |
| 0 steps | 0.0231 ± 0.0024 | 0.63% |
| 1 step | 0.0195 ± 0.0021 | 22.97% |
| 5 steps | 0.0200 ± 0.0023 | 15.31% |
| 10 steps | 0.0181 ± 0.0018 | 61.09% |

Table 3: GANs trained with unrolling are better able to match images in the training set than standard GANs, likely due to mode dropping by the standard GAN. Results show the MSE between training images and the best reconstruction for a model with the given number of unrolling steps. The fraction of training images best reconstructed by a given model is given in the final column.
1611.02163#26
1611.02163#28
1611.02163
[ "1511.06350" ]
1611.02163#28
Unrolled Generative Adversarial Networks
The best reconstruction is found by optimizing the latent representation z to produce the closest matching pixel output G(z; θ_G). Results are averaged over all 5 runs of each model with different random seeds.

# 3.4.1 INFERENCE VIA OPTIMIZATION

Since likelihood cannot be tractably computed, over-fitting of GANs is typically tested by taking samples and computing the nearest-neighbor images in pixel space from the training data (Goodfellow et al., 2014). We will do the reverse, and measure the ability of the generative model to generate images that look like specific samples from the training data. If we did this by generating random samples from the model, we would need an exponentially large number of samples. We instead treat finding the nearest neighbor x_nearest to a target image x_target as an optimization task,

$$z_{\mathrm{nearest}} = \operatorname*{argmin}_{z} \left\| G(z; \theta_G) - x_{\mathrm{target}} \right\|_2^2 \qquad (13)$$

$$x_{\mathrm{nearest}} = G(z_{\mathrm{nearest}}; \theta_G). \qquad (14)$$
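A hedged PyTorch sketch of this inference-via-optimization procedure is shown below, using the LBFGS optimizer and random restarts mentioned in the following paragraph; the generator interface, latent dimension, and iteration counts are illustrative assumptions.

```python
import torch

def reconstruct(generator, x_target, z_dim=256, n_restarts=3, steps=50):
    """Solve Eqs. 13-14: find the latent z whose generated image best matches
    x_target in pixel-space MSE, keeping the best of several random restarts."""
    best_x, best_mse = None, float("inf")
    for _ in range(n_restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.LBFGS([z], max_iter=steps)

        def closure():
            opt.zero_grad()
            loss = ((generator(z) - x_target) ** 2).mean()
            loss.backward()
            return loss

        opt.step(closure)
        with torch.no_grad():
            mse = ((generator(z) - x_target) ** 2).mean().item()
            if mse < best_mse:
                best_x, best_mse = generator(z).clone(), mse
    return best_x, best_mse          # x_nearest and its reconstruction MSE
```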
1611.02163#27
1611.02163#29
1611.02163
[ "1511.06350" ]
1611.02163#29
Unrolled Generative Adversarial Networks
This concept of backpropagating to generate images has been widely used in visualizing features from discriminative networks (Simonyan et al., 2013; Yosinski et al., 2015; Nguyen et al., 2016) and has been applied to explore the visual manifold of GANs in (Zhu et al., 2016). We apply this technique to each of the models trained. We optimize with 3 random starts using LBFGS, which is the optimizer typically used in similar settings such as style transfer (Johnson et al., 2016; Champandard, 2016). Results comparing average mean squared errors between x_nearest and x_target in pixel space can be found in Table 3. In addition we compute the percent of images for which a certain configuration achieves the lowest loss when compared to the other configurations. In the zero step case, there is poor reconstruction and less than 1% of the time does it obtain the lowest error of the 4 configurations. Taking 1 unrolling step results in a significant improvement in MSE. Taking 10 unrolling steps results in more modest improvement, but continues to reduce the reconstruction MSE. To visually see this, we compare the result of the optimization process for 0, 1, 5, and 10 step configurations in Figure 5. To select for images where differences in behavior is most apparent, we sort the data by the absolute value of the fractional difference in MSE between the 0 and 10 step models, $\frac{|l_{0\,\mathrm{step}} - l_{10\,\mathrm{step}}|}{\frac{1}{2}(l_{0\,\mathrm{step}} + l_{10\,\mathrm{step}})}$. This highlights examples where either the 0 or 10 step model cannot accurately fit the data example but the other can. In Appendix G we show the same comparison for models initialized using different random seeds. Many of the zero step images are fuzzy and ill-defined, suggesting that these images cannot be generated by the standard GAN generative model, and come from a dropped mode. As more unrolling steps are added, the outlines become more clear and well defined: the model covers more of the distribution and thus can recreate these samples.

# 3.4.2 PAIRWISE DISTANCES

A second complementary approach is to compare statistics of data samples to the corresponding statistics for samples generated by the various models.
1611.02163#28
1611.02163#30
1611.02163
[ "1511.06350" ]
1611.02163#30
Unrolled Generative Adversarial Networks
One particularly simple and relevant statistic is the distribution over pairwise distances between random pairs of samples. In the case of mode collapse, greater probability mass will be concentrated in smaller volumes, and the distribution over inter-sample distances should be skewed towards smaller distances. We sample random pairs of images from each model, as well as from the training data, and compute histograms of the ℓ2 distances between those sample pairs. As illustrated in Figure 6, the standard GAN, with zero unrolling steps, has its probability mass skewed towards smaller ℓ2 intersample distances, compared to real data.
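This statistic is straightforward to compute; a small NumPy sketch is given below, with the number of pairs and bins as illustrative assumptions.

```python
import numpy as np

def pairwise_l2_histogram(samples, n_pairs=10000, bins=50, seed=0):
    """Histogram of L2 distances between randomly chosen pairs of samples,
    used to compare each model against the data distribution.
    `samples` is an (N, D) array of flattened images."""
    rng = np.random.default_rng(seed)
    n = samples.shape[0]
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    dists = np.linalg.norm(samples[i] - samples[j], axis=1)
    hist, edges = np.histogram(dists, bins=bins, density=True)
    return hist, edges, np.median(dists)
```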
1611.02163#29
1611.02163#31
1611.02163
[ "1511.06350" ]
1611.02163#31
Unrolled Generative Adversarial Networks
[Figure 5 columns: Data, 0 step, 1 step, 5 step, 10 step]

Figure 5: Training set images are more accurately reconstructed using GANs trained with unrolling than by a standard (0 step) GAN, likely due to mode dropping by the standard GAN. Raw data is on the left, and the optimized images to reach this target follow for 0, 1, 5, and 10 unrolling steps. The reconstruction MSE is listed below each sample. A random 1280 images were selected from the training set, and corresponding best reconstructions for each model were found via optimization. Shown here are the eight images with the largest absolute fractional difference between GANs trained with 0 and 10 unrolling steps.

As the number of unrolling steps is increased, the histograms over intersample distances increasingly come to resemble that for the data distribution. This is further evidence in support of unrolling decreasing the mode collapse behavior of GANs.
1611.02163#30
1611.02163#32
1611.02163
[ "1511.06350" ]
1611.02163#32
Unrolled Generative Adversarial Networks
# 4 DISCUSSION

In this work we developed a method to stabilize GAN training and reduce mode collapse by defining the generator objective with respect to unrolled optimization of the discriminator. We then demonstrated the application of this method to several tasks, where it either rescued unstable training, or reduced the tendency of the model to drop regions of the data distribution. The main drawback to this method is the computational cost of each training step, which increases linearly with the number of unrolling steps. There is a tradeoff between better approximating the true generator loss and the computation required to make this estimate. Depending on the architecture, one unrolling step can be enough. In other more unstable models, such as the RNN case, more are needed to stabilize training.
1611.02163#31
1611.02163#33
1611.02163
[ "1511.06350" ]
1611.02163#33
Unrolled Generative Adversarial Networks
We have some initial positive results suggesting it may be sufficient to further perturb the training gradient in the same direction that a single unrolling step perturbs it. While this is more computationally efficient, further investigation is required. The method presented here bridges some of the gap between theoretical and practical results for training of GANs. We believe developing better update rules for the generator and discriminator is an important line of work for GAN training. In this work we have only considered a small fraction of the design space. For instance, the approach could be extended to unroll G when updating D as well, letting the discriminator react to how the generator would move. It is also possible to unroll sequences of G and D updates. This would make updates that are recursive: G could react to maximize performance as if G and D had already updated.

# ACKNOWLEDGMENTS

We would like to thank Laurent Dinh, David Dohan, Vincent Dumoulin, Liam Fedus, Ishaan Gulrajani, Julian Ibarz, Eric Jang, Matthew Johnson, Marc Lanctot, Augustus Odena, Gabriel Pereyra,
1611.02163#32
1611.02163#34
1611.02163
[ "1511.06350" ]
1611.02163#34
Unrolled Generative Adversarial Networks
[Figure 6 plot: "Pairwise L2 Norm Distribution"; three panels compare the data against the 1-step, 5-step, and 10-step models, with the ℓ2 norm on the horizontal axis]

Figure 6: As the number of unrolling steps in GAN training is increased, the distribution of pairwise distances between model samples more closely resembles the same distribution for the data. Here we plot histograms of pairwise distances between randomly selected samples. The red line gives pairwise distances in the data, while each of the five blue lines in each plot represents a model trained with a different random seed. The vertical lines are the medians of each distribution.
1611.02163#33
1611.02163#35
1611.02163
[ "1511.06350" ]
1611.02163#35
Unrolled Generative Adversarial Networks
Colin Raffel, Sam Schoenholz, Ayush Sekhari, Jon Shlens, and Dale Schuurmans for insightful conversation, as well as the rest of the Google Brain Team.

# REFERENCES

Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric Thibodeau-Laufer, Saizheng Zhang, and Pascal Vincent. GSNs: Generative stochastic networks. arXiv preprint arXiv:1503.05571, 2015.
1611.02163#34
1611.02163#36
1611.02163
[ "1511.06350" ]
1611.02163#36
Unrolled Generative Adversarial Networks
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. David Belanger and Andrew McCallum. Structured prediction energy networks. arXiv preprint arXiv:1511.06350, 2015. Jörg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. arXiv preprint arXiv:1506.03877, 2015. Michael Bowling and Manuela Veloso.
1611.02163#35
1611.02163#37
1611.02163
[ "1511.06350" ]
1611.02163#37
Unrolled Generative Adversarial Networks
Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215-250, 2002. Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. Alex J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016. Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
1611.02163#36
1611.02163#38
1611.02163
[ "1511.06350" ]
1611.02163#38
Unrolled Generative Adversarial Networks
John M Danskin. The theory of max-min and its application to weapons allocation problems, volume 5. Springer Science & Business Media, 1967. Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural computation, 7(5):889-904, 1995. Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pp. 249-256, May 2010. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf. Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW:
1611.02163#37
1611.02163#39
1611.02163
[ "1511.06350" ]
1611.02163#39
Unrolled Generative Adversarial Networks
A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1462-1471, 2015. URL http://www.jmlr.org/proceedings/papers/v37/gregor15.html. Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network, 2016. URL https://arxiv.org/abs/1606.08571.
1611.02163#38
1611.02163#40
1611.02163
[ "1511.06350" ]
1611.02163#40
Unrolled Generative Adversarial Networks
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
1611.02163#39
1611.02163#41
1611.02163
[ "1511.06350" ]
1611.02163#41
Unrolled Generative Adversarial Networks
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html. Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016. Anatoli Juditsky, Arkadi Nemirovski, et al. First order methods for nonsmooth convex large-scale optimization, I: general purpose methods. Optimization for Machine Learning, pp. 121-148, 2011.
1611.02163#40
1611.02163#42
1611.02163
[ "1511.06350" ]
1611.02163#42
Unrolled Generative Adversarial Networks
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes, 2013. URL https://arxiv.org/abs/1312.6114. Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow, 2016.
1611.02163#41
1611.02163#43
1611.02163
[ "1511.06350" ]
1611.02163#43
Unrolled Generative Adversarial Networks
Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. arXiv preprint arXiv:1503.03167, 2015. Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016. URL https://arxiv.org/abs/1609.04802. Dougal Maclaurin, David Duvenaud, and Ryan P. Adams.
1611.02163#42
1611.02163#44
1611.02163
[ "1511.06350" ]
1611.02163#44
Unrolled Generative Adversarial Networks
Gradient-based hyperparameter optimization through reversible learning, 2015. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304, 2016. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016. Augustus Odena, Christopher Olah, and Jonathon Shlens.
1611.02163#43
1611.02163#45
1611.02163
[ "1511.06350" ]
1611.02163#45
Unrolled Generative Adversarial Networks
Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016. Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst., 30(2):7:1-7:36, March 2008. ISSN 0164-0925. doi: 10.1145/1330017.1330018. URL http://doi.acm.org/10.1145/1330017.1330018. Ben Poole, Alexander A Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generator objectives for GANs. arXiv preprint arXiv:1612.02780, 2016. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In NIPS, 2016a. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, 2016b.
1611.02163#44
1611.02163#46
1611.02163
[ "1511.06350" ]
1611.02163#46
Unrolled Generative Adversarial Networks
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent Gaussian models. In International Conference on Machine Learning. Citeseer, 2014. Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. Satinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence, pp. 541-548. Morgan Kaufmann Publishers Inc., 2000. Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of The 32nd International Conference on Machine Learning, pp. 2256-2265, 2015. URL http://arxiv.org/abs/1503.03585.
1611.02163#45
1611.02163#47
1611.02163
[ "1511.06350" ]
1611.02163#47
Unrolled Generative Adversarial Networks
Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised MAP inference for image super-resolution, 2016. URL https://arxiv.org/abs/1610.04490v1. L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems 28, Dec 2015. URL http://arxiv.org/abs/1506.03478. L. Theis, A. van den Oord, and M. Bethge.
1611.02163#46
1611.02163#48
1611.02163
[ "1511.06350" ]
1611.02163#48
Unrolled Generative Adversarial Networks
A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016. URL http://arxiv.org/abs/1511.01844. T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012. Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a. URL http://arxiv.org/abs/1601.06759. Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
1611.02163#47
1611.02163#49
1611.02163
[ "1511.06350" ]
1611.02163#49
Unrolled Generative Adversarial Networks
J. Mach. Learn. Res., 11:3371-3408, December 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953039. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Chongjie Zhang and Victor R Lesser. Multi-agent learning with policy prediction. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016. Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros.
1611.02163#48
1611.02163#50
1611.02163
[ "1511.06350" ]
1611.02163#50
Unrolled Generative Adversarial Networks
Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
# Appendix
# A 2D GAUSSIAN TRAINING DETAILS
Network architecture and experimental details for the experiment in Section 3.1 are as follows: The dataset is sampled from a mixture of 8 Gaussians of standard deviation 0.02. The means are equally spaced around a circle of radius 2. The generator network consists of a fully connected network with 2 hidden layers of size 128 with relu activations, followed by a linear projection to 2 dimensions. All weights are initialized to be orthogonal with a scaling of 0.8. The discriminator network first scales its input down by a factor of 4 (to roughly scale it to (-1, 1)), followed by a 1-hidden-layer fully connected network with relu activations and a linear layer of size 1 that acts as the logit. The generator minimizes LG = log(D(x)) + log(1 - D(G(z))) and the discriminator minimizes LD = -log(D(x)) - log(1 - D(G(z))), where x is sampled from the data distribution and z ~ N(0, I_256). Both networks are optimized using Adam (Kingma & Ba, 2014) with a learning rate of 1e-4 and β1 = 0.5. The network is trained by alternating updates of the generator and the discriminator; one step consists of either a G or a D update.
# B MORE MIXTURE OF GAUSSIAN EXPERIMENTS
B.1 EFFECTS OF TIME DELAY / HISTORICAL AVERAGING
Another comparison we looked at was with regard to historical-averaging-based approaches. Recently, similarly inspired approaches have been used in (Salimans et al., 2016) to stabilize training. For our study, we looked at taking an ensemble of discriminators over time. First, we looked at taking an ensemble of the last N steps, as shown in Figure App.1.
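For concreteness, the following is a minimal PyTorch sketch of the setup just described (the original implementation used TensorFlow; the hidden size of the discriminator, the batch size, the number of iterations, and the exact alternation are assumptions not specified in the text):

```python
import math
import torch
import torch.nn as nn

def sample_mixture(batch, std=0.02, radius=2.0):
    # mixture of 8 Gaussians with means equally spaced on a circle of radius 2
    angles = torch.randint(0, 8, (batch,)).float() * (2 * math.pi / 8)
    means = torch.stack([radius * angles.cos(), radius * angles.sin()], dim=1)
    return means + std * torch.randn(batch, 2)

G = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, 2))
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))  # hidden size 128 is an assumption

for m in list(G.modules()) + list(D.modules()):
    if isinstance(m, nn.Linear):
        nn.init.orthogonal_(m.weight, gain=0.8)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

for step in range(25000):
    x = sample_mixture(512)
    z = torch.randn(512, 256)
    # D step: minimize L_D = -log D(x) - log(1 - D(G(z))); inputs are scaled down by 4
    d_loss = bce(D(x / 4), torch.ones(512, 1)) + bce(D(G(z).detach() / 4), torch.zeros(512, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # G step: minimize the G-dependent part of L_G, i.e. log(1 - D(G(z)))
    z = torch.randn(512, 256)
    g_loss = -bce(D(G(z) / 4), torch.zeros(512, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note that this sketch takes a G and a D update in the same loop iteration, whereas the text counts each single-network update as one step.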
1611.02163#49
1611.02163#51
1611.02163
[ "1511.06350" ]
1611.02163#51
Unrolled Generative Adversarial Networks
[Figure App.1 image: grids of generated samples for different discriminator-ensemble sizes; x-axis: update steps (5,000-50,000).]
Figure App.1:
1611.02163#50
1611.02163#52
1611.02163
[ "1511.06350" ]
1611.02163#52
Unrolled Generative Adversarial Networks
Historical averaging does not visibly increase stability on the mixture of Gaussians task. Each row corresponds to an ensemble of discriminators consisting of the indicated number of immediately preceding discriminators. The columns correspond to different numbers of training steps.
To further explore this idea, we ran experiments with an ensemble of 5 discriminators, but with different periods between replacing discriminators in the ensemble. For example, sampling at a rate of 100 means it would take 500 steps to replace all 5 discriminators. Results can be seen in Figure App.2. We observe that with longer and longer time delays, the model becomes less and less stable. We hypothesize that this is due to the initial shape of the discriminator loss surface. When training, the discriminator's estimates of probability densities are only accurate on regions where it was trained. When fixing this discriminator, we are removing the feedback between the generator's exploitation and the discriminator's ability to move.
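To make the delayed-ensemble setup concrete, here is a small PyTorch sketch (how the ensemble's outputs are combined is not specified in the text; averaging the logits below, as well as the class and method names, are assumptions):

```python
import copy
from collections import deque
import torch

class DelayedDiscriminatorEnsemble:
    """Keeps the last `size` snapshots of a discriminator, refreshing one every `period` steps."""
    def __init__(self, D, size=5, period=100):
        self.D, self.size, self.period = D, size, period
        self.snapshots = deque([copy.deepcopy(D) for _ in range(size)], maxlen=size)

    def maybe_refresh(self, step):
        if step % self.period == 0:
            self.snapshots.append(copy.deepcopy(self.D))  # drop the oldest snapshot

    def __call__(self, x):
        # the generator is trained against the average output of the (frozen) past discriminators
        return torch.stack([d(x) for d in self.snapshots]).mean(dim=0)
```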
1611.02163#51
1611.02163#53
1611.02163
[ "1511.06350" ]
1611.02163#53
Unrolled Generative Adversarial Networks
[Figure App.2 image: grids of generated samples; x-axis: update steps; rows: number of steps between discriminator updates in the ensemble.]
Figure App.2:
1611.02163#52
1611.02163#54
1611.02163
[ "1511.06350" ]
1611.02163#54
Unrolled Generative Adversarial Networks
Introducing longer time delays between discriminator updates in the ensemble results in instability and probability distributions that fall outside the visualized window. The x axis is the number of weight updates and the y axis is the number of steps skipped between discriminator updates when selecting the ensemble of 5 discriminators.
As a result, the generator is able to exploit these fixed areas of poor performance for older discriminators in the ensemble. New discriminators (over)compensate for this, leading the system to diverge.
B.2 EFFECTS OF THE SECOND GRADIENT
A second factor we analyzed is the effect of backpropagating the learning signal through the unrolling in Equation 12. We can turn this backpropagation on or off by introducing stop-gradient calls into our computation graph between unrolling steps. With the stop gradients in place, the update signal corresponds only to the first term in Equation 12. We looked at three configurations: without stop gradients (the vanilla unrolled GAN); with stop gradients; and with stop gradients but taking the average over the k unrolling steps instead of the final value. Results can be seen in Figure App.3. We initially observed no difference between unrolling with and without the second gradient, as both required 3 unrolling steps to become stable. When the discriminator is unrolled to convergence, the second gradient term becomes zero. Due to the simplicity of the problem, we suspect that the discriminator nearly converged for every generator step, and the second gradient term was thus irrelevant.
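The stop-gradient toggle can be sketched as follows in PyTorch (the paper's code was written in TensorFlow, where tf.stop_gradient plays the role of detach; the function names, learning rate, sigmoid output assumption, and the use of plain gradient steps for the inner updates are assumptions):

```python
import torch
from torch.func import functional_call

def unroll_discriminator(D, G, x_real, z, k=5, lr=1e-4, backprop_through_unroll=True):
    """Simulate k gradient steps of D and return the resulting 'fast' parameters.
    With backprop_through_unroll=False the parameters are detached after every
    step, so only the first term of the unrolled gradient (Equation 12) survives."""
    params = dict(D.named_parameters())
    for _ in range(k):
        d_real = torch.sigmoid(functional_call(D, params, (x_real,)))
        d_fake = torch.sigmoid(functional_call(D, params, (G(z),)))
        loss_d = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).mean()
        grads = torch.autograd.grad(loss_d, list(params.values()),
                                    create_graph=backprop_through_unroll)
        params = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
        if not backprop_through_unroll:
            # stop gradient: cut the graph between unrolling steps
            params = {n: p.detach().requires_grad_(True) for n, p in params.items()}
    return params

def generator_surrogate_loss(D, G, z, fast_params):
    # non-saturating generator loss evaluated against the unrolled discriminator
    d_fake = torch.sigmoid(functional_call(D, fast_params, (G(z),)))
    return -torch.log(d_fake + 1e-8).mean()
```

Setting backprop_through_unroll=False corresponds to the "with stop gradients" configuration; leaving it True corresponds to the vanilla unrolled objective with both gradient terms.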
1611.02163#53
1611.02163#55
1611.02163
[ "1511.06350" ]
1611.02163#55
Unrolled Generative Adversarial Networks
To test this, we modified the dynamics to perform five generator steps for each discriminator update. Results are shown in Figure App.4. With the discriminator now kept out of equilibrium, successful training can be achieved with half as many unrolling steps when using both terms in the gradient as when only including the first term.
# C RNN MNIST TRAINING DETAILS
The network architecture for the experiment in Section 3.2 is as follows: The MNIST dataset is scaled to [-1, 1).
1611.02163#54
1611.02163#56
1611.02163
[ "1511.06350" ]
1611.02163#56
Unrolled Generative Adversarial Networks
The generator ï¬ rst scales the 256D noise vector through a 256 unit fully connected layer with relu activation. This is then fed into the initial state of a 256D LSTM(Hochreiter & Schmidhuber, 1997) that runs 28 steps corresponding to the number of columns in MNIST. The resulting sequence of ac- tivations is projected through a fully connected layer with 28 outputs with a tanh activation function. All weights are initialized via the â
1611.02163#55
1611.02163#57
1611.02163
[ "1511.06350" ]
1611.02163#57
Unrolled Generative Adversarial Networks
Xavier' initialization (Glorot & Bengio, 2010). The forget bias on the LSTM is initialized to 1. The discriminator network feeds the input into a Convolution(16, stride=2) followed by a Convolution(32, stride=2) followed by a Convolution(32, stride=2); all convolutions have stride 2. As in (Radford et al., 2015), leaky rectifiers are used with a 0.3 leak. Batch normalization is applied after each layer (Ioffe & Szegedy, 2015). The resulting 4D tensor is then flattened and a linear projection is performed to a single scalar.
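A rough PyTorch sketch of this recurrent generator follows (layer sizes are taken from the text; feeding a zero input sequence and placing the noise only in the initial LSTM state is one possible reading, and the forget-gate bias initialization is omitted for brevity):

```python
import torch
import torch.nn as nn

class ColumnLSTMGenerator(nn.Module):
    """Generates a 28x28 MNIST image, one column per LSTM step, from a 256-D noise vector."""
    def __init__(self, z_dim=256, hidden=256, cols=28):
        super().__init__()
        self.init_state = nn.Linear(z_dim, hidden)
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.to_pixels = nn.Linear(hidden, 28)
        self.cols = cols

    def forward(self, z):
        b = z.size(0)
        h0 = torch.relu(self.init_state(z)).unsqueeze(0)        # (1, b, hidden)
        c0 = torch.zeros_like(h0)
        steps = torch.zeros(b, self.cols, 1, device=z.device)   # dummy inputs, one per column
        out, _ = self.lstm(steps, (h0, c0))                     # (b, 28, hidden)
        return torch.tanh(self.to_pixels(out)).view(b, 1, 28, 28)

imgs = ColumnLSTMGenerator()(torch.randn(4, 256))                # -> (4, 1, 28, 28)
```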
1611.02163#56
1611.02163#58
1611.02163
[ "1511.06350" ]
1611.02163#58
Unrolled Generative Adversarial Networks
[Figure App.3 image: two panels, "Unrolled GAN" and "Unrolled GAN without second gradient"; x-axis: update steps; rows: number of unrolling steps.]
Figure App.3:
1611.02163#57
1611.02163#59
1611.02163
[ "1511.06350" ]
1611.02163#59
Unrolled Generative Adversarial Networks
If the discriminator remains nearly at its optimum during learning, then performance is nearly identical with and without the second gradient term in Equation 12. As shown in Figure App.4, when the discriminator lags behind the generator, backpropagating through unrolling aids convergence.
The generator network minimises LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 - D(G(z))). Both networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 1e-4 and β1 = 0.5. The network is trained by alternately updating the generator and the discriminator for 150k steps; one step consists of just 1 network update.
# D CIFAR10/MNIST TRAINING DETAILS
The network architectures for the discriminator, generator, and encoder are as follows. All convolutions have a kernel size of 3x3 with batch normalization. The discriminator uses leaky ReLUs with a 0.3 leak and the generator uses standard ReLU.
The generator network is defined as:
Input: z ~ N(0, I_256)
Fully connected: 4 * 4 * 512 outputs
Reshape to image 4,4,512
Transposed Convolution: 256 outputs, stride 2
Transposed Convolution: 128 outputs, stride 2
Transposed Convolution: 64 outputs, stride 2
Convolution: 1 or 3 outputs, stride 1
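A PyTorch sketch of the generator table above (3x3 kernels and batch normalization as stated; the padding/output-padding values and the final tanh are assumptions chosen so that each transposed convolution doubles the spatial size, giving 32x32 outputs):

```python
import torch
import torch.nn as nn

def cifar_mnist_generator(out_channels=3, z_dim=256):
    return nn.Sequential(
        nn.Linear(z_dim, 4 * 4 * 512), nn.ReLU(),
        nn.Unflatten(1, (512, 4, 4)),
        nn.ConvTranspose2d(512, 256, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(256), nn.ReLU(),
        nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(128), nn.ReLU(),
        nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, out_channels, 3, stride=1, padding=1), nn.Tanh(),
    )

samples = cifar_mnist_generator()(torch.randn(8, 256))   # -> (8, 3, 32, 32)
```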
1611.02163#58
1611.02163#60
1611.02163
[ "1511.06350" ]
1611.02163#60
Unrolled Generative Adversarial Networks
[Figure App.4 image: two panels, "Unrolled GAN with 5 G Steps per D" and "Unrolled GAN with 5 G Steps per D without second gradient"; x-axis: update steps; rows: number of unrolling steps.]
Figure App.4: Backpropagating through the unrolling process aids convergence when the discriminator does not fully converge between generator updates. When taking 5 generator steps per discriminator step, unrolling greatly increases stability, requiring only 5 unrolling steps to converge. Without the second gradient it requires 10 unrolling steps. Also see Figure App.3.
The discriminator network is defined as:
Input: x ~ pdata or G
Convolution: 64 outputs, stride 2
Convolution: 128 outputs, stride 2
Convolution: 256 outputs, stride 2
Flatten
Fully Connected: 1 output
The generator network minimises LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 - D(G(z))). The networks are trained with Adam with a generator learning rate of 1e-4 and a discriminator learning rate of 2e-4. The network is trained by alternately updating the generator and the discriminator for 100k steps; one step consists of just 1 network update.
1611.02163#59
1611.02163#61
1611.02163
[ "1511.06350" ]
1611.02163#61
Unrolled Generative Adversarial Networks
# E 1000 CLASS MNIST
The generator network is defined as:
Input: z ~ N(0, I_256)
Fully connected: 4 * 4 * 64 outputs
Reshape to image 4,4,64
Transposed Convolution: 32 outputs, stride 2
Transposed Convolution: 16 outputs, stride 2
Transposed Convolution: 8 outputs, stride 2
Convolution: 3 outputs, stride 1
The discriminator network is parametrized by a size X and is defined as follows. In our tests, we used X of 1/4 and 1/2.
Input: x ~ pdata or G
Convolution: 8*X outputs, stride 2
Convolution: 16*X outputs, stride 2
Convolution: 32*X outputs, stride 2
Flatten
Fully Connected: 1 output
# F COLORED MNIST DATASET
F.1 DATASET
To generate this dataset we first took the MNIST digit, I, scaled between 0 and 1. For each image we sample a color, C, normally distributed with mean=0 and std=0.5. To generate a colored digit between (-1, 1) we compute I * C + (I - 1).
1611.02163#60
1611.02163#62
1611.02163
[ "1511.06350" ]
1611.02163#62
Unrolled Generative Adversarial Networks
Finally, we add a small amount of pixel-independent noise sampled from a normal distribution with std=0.2, and the resulting values are clipped between (-1, 1). When visualized, this generates images and samples as shown in Figure App.5. Once again, it is very hard to visually see differences in sample diversity when comparing the 128 and the 512 sized models.
Figure App.5: Right: samples from the data distribution. Middle: samples from the 1/4 size model with 0 look ahead steps (worst diversity). Left: samples from the 1/1 size model with 10 look ahead steps (most diversity).
F.2 MODELS
The models used in this section are parametrized by a variable X to control capacity. A value of X=1 is the same architecture used in the cifar10 experiments. We used 1/4, 1/2 and 1 as these values.
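The colored-digit construction from Section F.1 can be written directly; a small PyTorch sketch (the function name and tensor shapes are assumptions):

```python
import torch

def colorize_digit(digit):
    """digit: (28, 28) MNIST image scaled to [0, 1]; returns a (3, 28, 28) image in (-1, 1)."""
    color = 0.5 * torch.randn(3, 1, 1)                 # per-image color C, mean 0, std 0.5
    img = digit * color + (digit - 1.0)                # I * C + (I - 1)
    img = img + 0.2 * torch.randn_like(img)            # pixel-independent noise, std 0.2
    return img.clamp(-1.0, 1.0)

example = colorize_digit(torch.rand(28, 28))
```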
1611.02163#61
1611.02163#63
1611.02163
[ "1511.06350" ]
1611.02163#63
Unrolled Generative Adversarial Networks
The generator network is defined as:
Input: z ~ N(0, I_256)
Fully connected: 4 * 4 * 512*X outputs
Reshape to image 4,4,512*X
Transposed Convolution: 256*X outputs, stride 2
Transposed Convolution: 128*X outputs, stride 2
Transposed Convolution: 64*X outputs, stride 2
Convolution: 3 outputs, stride 1
The discriminator network is defined as:
Input: x ~ pdata or G
Convolution: 64*X outputs, stride 2
Convolution: 128*X outputs, stride 2
Convolution: 256*X outputs, stride 2
Flatten
Fully Connected: 1 output
# G OPTIMIZATION BASED VISUALIZATIONS
More examples of model-based optimization are shown below. We performed 5 runs with different seeds for each of the unrolling-step
1611.02163#62
1611.02163#64
1611.02163
[ "1511.06350" ]
1611.02163#64
Unrolled Generative Adversarial Networks
configurations. Below are comparisons for each run index. Ideally this would be a many-to-many comparison, but for space efficiency we grouped the runs by the index in which they were run.
[Figure App.6 image: image grid with per-image scores.]
Figure App.6: Samples from 1/5 with different random seeds.
1611.02163#63
1611.02163#65
1611.02163
[ "1511.06350" ]
1611.02163#65
Unrolled Generative Adversarial Networks
[Figure App.7 image: image grid with per-image scores.]
Figure App.7: Samples from 2/5 with different random seeds.
1611.02163#64
1611.02163#66
1611.02163
[ "1511.06350" ]
1611.02163#66
Unrolled Generative Adversarial Networks
[Figure App.8 image: image grid with per-image scores.]
Figure App.8: Samples from 3/5 with different random seeds.
1611.02163#65
1611.02163#67
1611.02163
[ "1511.06350" ]
1611.02163#67
Unrolled Generative Adversarial Networks
[Figure App.9 image: image grid with per-image scores.]
Figure App.9: Samples from 4/5 with different random seeds.
1611.02163#66
1611.02163#68
1611.02163
[ "1511.06350" ]
1611.02163#68
Unrolled Generative Adversarial Networks
[Figure App.10 image: image grid with per-image scores.]
Figure App.10: Samples from 5/5 with different random seeds.
1611.02163#67
1611.02163
[ "1511.06350" ]
1611.02205#0
Playing SNES in the Retro Learning Environment
arXiv:1611.02205v2 [cs.LG] 7 Feb 2017
# PLAYING SNES IN THE RETRO LEARNING ENVIRONMENT
Nadav Bhonker*, Shai Rozenberg* and Itay Hubara
Department of Electrical Engineering
Technion, Israel Institute of Technology
(*) indicates equal contribution
{nadavbh,shairoz}@tx.technion.ac.il
[email protected]
# ABSTRACT
1611.02205#1
1611.02205
[ "1609.05143" ]
1611.02205#1
Playing SNES in the Retro Learning Environment
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
1611.02205#0
1611.02205#2
1611.02205
[ "1609.05143" ]
1611.02205#2
Playing SNES in the Retro Learning Environment
# INTRODUCTION Controlling artiï¬ cial agents using only raw high-dimensional input data such as image or sound is a difï¬ cult and important task in the ï¬ eld of Reinforcement Learning (RL). Recent breakthroughs in the ï¬ eld allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz et al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world is usually either expensive or not feasible, as the real world is far too complex for the agent to perceive. Therefore in practice the interaction is simulated by a virtual environment which receives feedback on a decision made by the algorithm. Traditionally, games were used as a RL environment, dating back to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro, 1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and tasks which are highly correlated with real-world problems. For example, an agent that masters a racing game, by observing a simulated driverâ s view screen as input, may be usefull for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment (ALE) (Bellemare et al., 2013) which provides a common interface to dozens of Atari 2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat- form, allowing a controlled experiment setup for algorithm evaluation and comparison. The main challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev- ing a score higher than an expert human player) without providing the algorithm any game-speciï¬ c information (i.e., using the same input available to a human - the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made a breakthrough in the ï¬ eld of Deep Reinforcement Learning by achieving human level performance on 29 out of 49 games. In this work we present a new environment â the Retro Learning Environ- ment (RLE). RLE sets new challenges by providing a uniï¬ ed interface for Atari 2600 games as well as more advanced gaming consoles.
1611.02205#1
1611.02205#3
1611.02205
[ "1609.05143" ]
1611.02205#3
Playing SNES in the Retro Learning Environment
As a start we focused on the Super Nintendo Entertainment 1 System (SNES). Out of the ï¬ ve SNES games we tested using state-of-the-art algorithms, only one was able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Bus¸oniu et al., 2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-conï¬ gured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases.
1611.02205#2
1611.02205#4
1611.02205
[ "1609.05143" ]
1611.02205#4
Playing SNES in the Retro Learning Environment
The main contributions of the paper are as follows: â ¢ Introducing a novel RL environment with signiï¬ cant challenges and an easy agent evalu- ation technique (enabling agents to compete against each other) which could lead to new and more advanced RL algorithms. â ¢ A new method to train an agent by enabling it to train against several opponents, making the ï¬ nal policy more robust. â ¢ Encapsulating several different challenges to a single RL environment. 2 RELATED WORK 2.1 ARCADE LEARNING ENVIRONMENT The Arcade Learning Environment is a software framework designed for the development of RL algorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward in every step. The action is the equivalent to a humanâ s joystick button combination and the reward is the difference between the scores at time stamp t and t â
1611.02205#3
1611.02205#5
1611.02205
[ "1609.05143" ]
1611.02205#5
Playing SNES in the Retro Learning Environment
1. The diversity of games for Atari provides a solid benchmark since different games have signiï¬ cantly different goals. Atari 2600 has over 500 games, currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison. 2.2 INFINITE MARIO Inï¬ nite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels are randomly generated. On these levels the Mario AI Competition was held. During the competition, several algorithms were trained on Inï¬ nite Mario and their performances were measured in terms of the number of stages completed. As opposed to ALE, training is not based on the raw screen data but rather on an indication of Marioâ s (the playerâ s) location and objects in its surrounding.
1611.02205#4
1611.02205#6
1611.02205
[ "1609.05143" ]
1611.02205#6
Playing SNES in the Retro Learning Environment
This environment no longer poses a challenge for state of the art algorithms. Its main shortcoming lie in the fact that it provides only a single game to be learnt. Additionally, the environment provides hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use of planning algorithms that highly outperform any learning based algorithm. 2.3 OPENAI GYM The OpenAI gym (Brockman et al., 2016) is an open source platform with the purpose of creating an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments supported by it. For example ALE, Go, MouintainCar and VizDoom (Zhu et al., 2016), an environment for the learning of the 3D ï¬ rst-person-shooter game â Doomâ . OpenAI Gymâ s recent appearance and wide usage indicates the growing interest and research done in the ï¬ eld of RL. 2.4 OPENAI UNIVERSE Universe (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V, Portal as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesnâ t run the games locally and requires a VNC interface to a server that runs the games.
1611.02205#5
1611.02205#7
1611.02205
[ "1609.05143" ]
1611.02205#7
Playing SNES in the Retro Learning Environment
This leads to a lower frame rate and thus longer training times. 2 2.5 MALMO Malmo (Johnson et al., 2016) is an artiï¬ cial intelligence experimentation platform of the famous game â Minecraftâ . Although Malmo consists of only a single game, it presents numerous challenges since the â Minecraftâ game can be conï¬ gured differently each time. The input to the RL algorithms include speciï¬ c features indicating the â stateâ of the game and the current reward.
1611.02205#6
1611.02205#8
1611.02205
[ "1609.05143" ]
1611.02205#8
Playing SNES in the Retro Learning Environment
2.6 DEEPMIND LAB DeepMind Lab (?) is a ï¬ rst-person 3D platform environment which allows training RL algorithms on several different challenges: static/random map navigation, collect fruit (a form of reward) and a laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. In LAB the agent observations are the game screen (with an additional depth channel) and the velocity of the character. LAB supports four games (one game - four different modes). 2.7 DEEP Q-LEARNING In our work, we used several variant of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015), an RL algorithm whose goal is to ï¬ nd an optimal policy (i.e., given a current state, choose action that maximize the ï¬
1611.02205#7
1611.02205#9
1611.02205
[ "1609.05143" ]
1611.02205#9
Playing SNES in the Retro Learning Environment
nal score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving ,jumping). DQN learns through trial and error while trying to estimate the â Q-functionâ , which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy Ï . The Q-function is represented using a convolution neural network that receives the screen as input and predicts the best possible action at itâ s output. The Q-function weights θ are updated according to: O41(S2, an) =O,+ (Rigi + ymax(Qi(se41, a; 6) _ Q1(S¢, at; 4))VoQu(se, at; %), (1) where s;, S;41 are the current and next states, a; is the action chosen, a is the step size, y is the discounting factor R;,,1 is the reward received by applying a; at s;. 6â
1611.02205#8
1611.02205#10
1611.02205
[ "1609.05143" ]
1611.02205#10
Playing SNES in the Retro Learning Environment
represents the previous weights of the network that are updated periodically. Other than DQN, we examined two leading algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al. 2015p, a DQN based algorithm with a modified network update rule. Dueling Double DQN (Wang et al} 2015p, a modification of D-DQNâ s architecture in which the Q-function is modeled using a state (screen) dependent estimator and an action dependent estimator. 3 THE RETRO LEARNING ENVIRONMENT 3.1 SUPER NINTENDO ENTERTAINMENT SYSTEM The Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990. A total of 783 games were released, among them, the iconic Super Mario World, Donkey Kong Country and The Legend of Zelda. Table (1) presents a comparison between Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex.
1611.02205#9
1611.02205#11
1611.02205
[ "1609.05143" ]
1611.02205#11
Playing SNES in the Retro Learning Environment
3.2 IMPLEMENTATION To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella1, RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (li- bRetro site), that allows communication between front-end programs to game-console emulators. Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an esti- mated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis,
1611.02205#10
1611.02205#12
1611.02205
[ "1609.05143" ]
1611.02205#12
Playing SNES in the Retro Learning Environment
# 1http://stella.sourceforge.net/ 3 Saturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console imple- mented using the snes9x2 as itâ s games present interesting, yet plausible to overcome challenges. Additionally, we utilized the Genesis-Plus-GX3 emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000. 3.3 SOURCE CODE RLE is fully available as open source software for use under GNUâ s General Public License4. The environment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Adding a new game to the environment is a relatively simple process. # 3.4 RLE INTERFACE RLE provides a uniï¬ ed interface to all games in its supported consoles, acting as an RL-wrapper to the LibRetro interface. Initialization of the environment is done by providing a game (ROM ï¬ le) and a gaming-console (denoted by â coreâ ). Upon initialization, the ï¬ rst state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment.
1611.02205#11
1611.02205#13
1611.02205
[ "1609.05143" ]
1611.02205#13
Playing SNES in the Retro Learning Environment
Actions have a bit-wise representation where each controller button is represented by a one-hot vector. Therefore a combination of several buttons is possible using the bit-wise OR operator. The number of valid buttons combinations is larger than 700, therefore only the meaningful combinations are provided. The environments observation is the game screen, provided as a 3D array of 32 bit per pixel with dimensions which vary depending on the game. The reward can be deï¬ ned differently per game, usually we set it to be the score difference between two consecutive frames. By setting different conï¬ guration to the environment, it is possible to alter in-game properties such as difï¬ culty (i.e easy, medium, hard), its characters, levels, etc.
1611.02205#12
1611.02205#14
1611.02205
[ "1609.05143" ]
1611.02205#14
Playing SNES in the Retro Learning Environment
Table 1: Atari 2600, SNES and Genesis comparison Atari 2600 SNES Genesis Number of Games CPU speed ROM size RAM size Color depth Screen Size Number of controller buttons Possible buttons combinations 565 1.19MHz 2-4KB 128 bytes 8 bit 160x210 5 18 783 3.58MHz 0.5-6MB 128KB 16 bit 256x224 or 512x448 12 over 720 928 7.6 MHz 16 MBytes 72KB 16 bit 320x224 11 over 100
1611.02205#13
1611.02205#15
1611.02205
[ "1609.05143" ]
1611.02205#15
Playing SNES in the Retro Learning Environment
3.5 ENVIRONMENT CHALLENGES Integrating SNES and Genesis with RLE presents new challenges to the ï¬ eld of RL where visual information in the form of an image is the only state available to the agent. Obviously, SNES games are signiï¬ cantly more complex and unpredictable than Atari games. For example in sports games, such as NBA, while the player (agent) controls a single player, all the other nine playersâ behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., reward for an actions is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games, such as Super Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level which requires to move to keep moving to the right. Moreover, upon completing a level, a score bonus is given according to the time required for its completion. Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in section 4.2. Moreover, unlike Atari that consists of # 2http://www.snes9x.com/ 3https://github.com/ekeeke/Genesis-Plus-GX 4https://github.com/nadavbh12/Retro-Learning-Environment
1611.02205#14
1611.02205#16
1611.02205
[ "1609.05143" ]
1611.02205#16
Playing SNES in the Retro Learning Environment
4 eight directions and one action button, SNES has eight-directions pad and six actions buttons. Since combinations of buttons are allowed, and required at times, the actual actions space may be larger than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES is very rich, ï¬ lled with details which may move locally or across the screen, effectively acting as non-stationary noise since it provided little to no information regarding the state itself. Finally, we note that SNES utilized the ï¬ rst 3D games. In the game Wolfenstein, the player must navigate a maze from a ï¬ rst-person perspective, while dodging and attacking enemies. The SNES offers plenty of other 3D games such as ï¬ ight and racing games which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to â real worldâ tasks, as in the case of self driving cars, might be more beneï¬ cial.
1611.02205#15
1611.02205#17
1611.02205
[ "1609.05143" ]
1611.02205#17
Playing SNES in the Retro Learning Environment
A visual comparison of two games, Atari and SNES, is presented in Figure (1). pT q Figure 1: Atari 2600 and SNES game screen comparison: Left: â Boxingâ an Atari 2600 ï¬ ghting game , Right: â Mortal Kombatâ a SNES ï¬ ghting game. Note the exceptional difference in the amount of details between the two games. Therefore, distinguishing a relevant signal from noise is much more difï¬
1611.02205#16
1611.02205#18
1611.02205
[ "1609.05143" ]
1611.02205#18
Playing SNES in the Retro Learning Environment
cult. Table 2: Comparison between RLE and the latest RL environments Characteristics Number of Games In game adjustments1 Frame rate Observation (Input) RLE 8 out of 7000+ Yes 530fps2(SNES) screen, RAM OpenAI Universe 1000+ NO 60fps Screen Iniï¬ nte Mario 1 No 5675fps2 hand crafted features ALE 74 No 120fps screen, RAM Project Malmo 1 Yes <7000fps hand crafted features DeepMind Lab 4 Yes <1000fps screen + depth and velocity 1 Allowing changes in-the game conï¬ gurations (e.g., changing difï¬ culty, characters, etc.)
1611.02205#17
1611.02205#19
1611.02205
[ "1609.05143" ]
1611.02205#19
Playing SNES in the Retro Learning Environment
2 Measured on an i7-5930k CPU 4 EXPERIMENTS 4.1 EVALUATION METHODOLOGY The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by (Mnih et al., 2015). Each examined algorithm is trained until either it reached convergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player. For each game the human player was given two hours for training, and his performances were evaluated over 20 episodes. As the various algorithms donâ t use the game audio in the learning process, the audio was muted for both the agent and the human. From both, humans and agents
1611.02205#18
1611.02205#20
1611.02205
[ "1609.05143" ]
1611.02205#20
Playing SNES in the Retro Learning Environment
score, a random agent score (an agent performing actions randomly) was subtracted to assure that learning indeed occurred. It is important to note that DQN's ε-greedy approach (select a random action with a small probability ε) is present during testing, thus assuring that the same sequence of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, in our experiments we maintained the same pre-processing as DQN (i.e., downscaling the image to 84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn'
1611.02205#19
1611.02205#21
1611.02205
[ "1609.05143" ]