id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
1703.04933#50
Sharp Minima Can Generalize For Deep Nets
ow. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 4743â 4751. Curran Associates, Inc., 2016. Klyachko, Alexander A. Random walks on symmetric spaces and inequalities for matrix spectra. Linear Algebra and its Applications, 319(1-3):37â 59, 2000. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E.
1703.04933#49
1703.04933#51
1703.04933
[ "1609.03193" ]
1703.04933#51
Sharp Minima Can Generalize For Deep Nets
Ima- genet classiï¬ cation with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097â 1105, 2012. Hardt, Moritz, Recht, Ben, and Singer, Yoram. Train faster, gener- alize better: Stability of stochastic gradient descent. In Balcan, Maria-Florina and Weinberger, Kilian Q. (eds.), Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, vol- ume 48 of JMLR Workshop and Conference Proceedings, pp. 1225â
1703.04933#50
1703.04933#52
1703.04933
[ "1609.03193" ]
1703.04933#52
Sharp Minima Can Generalize For Deep Nets
1234. JMLR.org, 2016. URL http://jmlr.org/ proceedings/papers/v48/hardt16.html. Lafond, Jean, Vasilache, Nicolas, and Bottou, Léon. About diago- nal rescaling applied to neural nets. ICML Workshop on Opti- mization Methods for the Next Generation of Machine Learning, 2016. Larsen, Anders Boesen Lindbo, Sønderby, Søren Kaae, and Winther, Ole. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. URL http: //arxiv.org/abs/1512.09300.
1703.04933#51
1703.04933#53
1703.04933
[ "1609.03193" ]
1703.04933#53
Sharp Minima Can Generalize For Deep Nets
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ ers: Surpassing human-level perfor- mance on imagenet classiï¬ cation. In Proceedings of the IEEE international conference on computer vision, pp. 1026â 1034, 2015. Montufar, Guido F, Pascanu, Razvan, Cho, Kyunghyun, and Ben- gio, Yoshua.
1703.04933#52
1703.04933#54
1703.04933
[ "1609.03193" ]
1703.04933#54
Sharp Minima Can Generalize For Deep Nets
On the number of linear regions of deep neural networks. In Advances in neural information processing sys- tems, pp. 2924â 2932, 2014. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pp. 770â 778, 2016. Nair, Vinod and Hinton, Geoffrey E.
1703.04933#53
1703.04933#55
1703.04933
[ "1609.03193" ]
1703.04933#55
Sharp Minima Can Generalize For Deep Nets
Rectiï¬ ed linear units improve In Proceedings of the 27th restricted boltzmann machines. international conference on machine learning (ICML-10), pp. 807â 814, 2010. Sharp Minima Can Generalize For Deep Nets Nesterov, Yurii and Vial, Jean-Philippe. Conï¬ dence level solutions for stochastic programming. Automatica, 44(6):1559â 1568, 2008. Neyshabur, Behnam, Salakhutdinov, Ruslan R, and Srebro, Nati.
1703.04933#54
1703.04933#56
1703.04933
[ "1609.03193" ]
1703.04933#56
Sharp Minima Can Generalize For Deep Nets
Path-sgd: Path-normalized optimization in deep neural net- works. In Advances in Neural Information Processing Systems, pp. 2422â 2430, 2015. Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. ICLR, 2014. Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Googleâ s neural machine translation system: Bridging the gap between human and ma- chine translation. arXiv preprint arXiv:1609.08144, 2016. Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol. Understanding deep learning requires re- In ICLRâ 2017, arXiv:1611.03530, thinking generalization. 2017. Raghu, Maithra, Poole, Ben, Kleinberg, Jon, Ganguli, Surya, and Sohl-Dickstein, Jascha. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016. # A Radial transformations
1703.04933#55
1703.04933#57
1703.04933
[ "1609.03193" ]
1703.04933#57
Sharp Minima Can Generalize For Deep Nets
Rezende, Danilo Jimenez and Mohamed, Shakir. Variational in- ference with normalizing ï¬ ows. In Bach & Blei (2015), pp. 1530â 1538. URL http://jmlr.org/proceedings/ papers/v37/rezende15.html. We show an elementary transformation to locally perturb the geometry of a ï¬ nite-dimensional vector space and therefore affect the relative ï¬ atness between a ï¬ nite number minima, at least in terms of spectral norm of the Hessian. We deï¬ ne the function:
1703.04933#56
1703.04933#58
1703.04933
[ "1609.03193" ]
1703.04933#58
Sharp Minima Can Generalize For Deep Nets
Sagun, Levent, Bottou, Léon, and LeCun, Yann. Singularity of the hessian in deep learning. arXiv preprint arXiv:1611.07476, 2016. Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neu- ral networks. In Advances in Neural Information Processing Systems, pp. 901â 901, 2016. Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi.
1703.04933#57
1703.04933#59
1703.04933
[ "1609.03193" ]
1703.04933#59
Sharp Minima Can Generalize For Deep Nets
Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2226â 2234, 2016. V5 > 0,Vp â ¬J0, 5,V(r,#) â ¬ Ry x]0, 6, W(r,?,6,p) = I(r â ¬ [0,d]) r+ 1(r â ¬ [0,4]) p . +1(r â ¬]F,6]) ((- 5) me +6) wi (r,#,5,p) = U(r ¢ [0,4]) + U(r â ¬ (0,7) c + 1(r â ¬l?,6]) £= °
1703.04933#58
1703.04933#60
1703.04933
[ "1609.03193" ]
1703.04933#60
Sharp Minima Can Generalize For Deep Nets
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013. URL http://arxiv.org/abs/1312.6120. For a parameter Ë Î¸ â Î and δ > 0, Ï â ]0, δ[, Ë r â ]0, δ[, inspired by the radial ï¬ ows (Rezende & Mohamed, 2015) in we can deï¬ ne the radial transformations Simonyan, Karen and Zisserman, Andrew. Very deep convolutional In ICLRâ 2015, networks for large-scale image recognition. arXiv:1409.1556, 2015. v(@â 4ll.F50) ( * ) (0-6) +0 |9 â All vo â ¬ O, g-*(0) Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V.
1703.04933#59
1703.04933#61
1703.04933
[ "1609.03193" ]
1703.04933#61
Sharp Minima Can Generalize For Deep Nets
Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104â 3112, 2014. Swirszcz, Grzegorz, Czarnecki, Wojciech Marian, and Pascanu, Razvan. Local minima in training of deep networks. CoRR, abs/1611.06310, 2016. Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, and Fergus, Rob. In ICLRâ 2014, Intriguing properties of neural networks. arXiv:1312.6199, 2014. with Jacobian V0 â ¬O, (Vgu1)(8) = w"(r, 7,5, p) In â U(r â ¬]f, 5) Pe (0 6)"(4â ) +1(r â ¬}i,6)) â Tn, with r = ||0 â O|lo.
1703.04933#60
1703.04933#62
1703.04933
[ "1609.03193" ]
1703.04933#62
Sharp Minima Can Generalize For Deep Nets
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convo- lutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â 9, 2015. First, we can observe in Figure 6 that these transformations are purely local: they only have an effect inside the ball B2(Ë Î¸, δ). Through these transformations, you can arbitrarily perturb the ranking between several minima in terms of ï¬ atness as described in Subsection 5.1. Theis, Lucas, Oord, Aäron van den, and Bethge, Matthias.
1703.04933#61
1703.04933#63
1703.04933
[ "1609.03193" ]
1703.04933#63
Sharp Minima Can Generalize For Deep Nets
A note on the evaluation of generative models. In ICLRâ 2016 arXiv:1511.01844, 2016. Sharp Minima Can Generalize For Deep Nets # in y= drect (Greer ++ bpect(% +01 +b1) +++) OK + bx-1) On + 0K = brect (Greet +++ Greet (w+ 0x81 + arb) +++) K-1 -aK-19K-1 + Il axbic-1) -aKdK + dK k=l for Ty a, = 1. This can decrease the amount of eigen- values of the Hessian that can be arbitrarily influenced. (a) Ï (r, Ë r, δ, Ï ) # C Rectiï¬
1703.04933#62
1703.04933#64
1703.04933
[ "1609.03193" ]
1703.04933#64
Sharp Minima Can Generalize For Deep Nets
ed neural network and Lipschitz continuity Relative to recent works (Hardt et al., 2016; Gonen & Shalev-Shwartz, 2017) assuming Lipschitz continuity of the loss function to derive uniform stability bound, we make the following observation: Theorem 6. For a one-hidden layer rectiï¬ ed neural network of the form y = Ï rect(x · θ1) · θ2, if L is not constant, then it is not Lipschitz continuous. (b) gâ 1(θ) Figure 6: An example of a radial transformation on a 2- dimensional space. We can see that only the area in blue and red, i.e. inside B2(Ë Î¸, δ), are affected. Best seen with colors. Proof.
1703.04933#63
1703.04933#65
1703.04933
[ "1609.03193" ]
1703.04933#65
Sharp Minima Can Generalize For Deep Nets
Since a Lipschitz function is necessarily absolutely continuous, we will consider the cases where L is absolutely continuous. First, if L has zero gradient almost everywhere, then L is constant. Now, if there is a point θ with non-zero gradient, then by writing # B Considering the bias parameter (â L)(θ1, θ2) = [(â θ1L)(θ1, θ2) (â θ2L)(θ1, θ2)], When we consider the bias parameter for a one (hidden) layer neural network, the non-negative homogeneity prop- erty translates into
1703.04933#64
1703.04933#66
1703.04933
[ "1609.03193" ]
1703.04933#66
Sharp Minima Can Generalize For Deep Nets
we have (â L)(αθ1, αâ 1θ2) = [αâ 1(â θ1L)(θ1, θ2) α(â θ2L)(θ1, θ2)]. Without loss of generality, we consider (Vo, L)(01, 92) 4 0. Then the limit of the norm y = Ï rect(x · θ1 + b1) · θ2 + b2 = Ï rect(x · αθ1 + αb1) · αâ 1θ2 + b2, I(VL)(01, a 02)||3 = a *||(Vo, L)(01, 62) 3 +07 ||(Vo,L)(01,42)I|3 of the gradient goes to +â as α goes to 0. Therefore, L is not Lipschitz continuous. which results in conclusions similar to section 4.
1703.04933#65
1703.04933#67
1703.04933
[ "1609.03193" ]
1703.04933#67
Sharp Minima Can Generalize For Deep Nets
For a deeper rectiï¬ ed neural network, this property results This result can be generalized to several other models con- taining a one-hidden layer rectiï¬ ed neural network, includ- ing deeper rectiï¬ ed networks. Sharp Minima Can Generalize For Deep Nets # D Euclidean distance and input representation A natural consequence of Subsection 5.2 is that metrics re- lying on Euclidean metric like mean square error or Earth- mover distance will rank very differently models depending on the input representation chosen. Therefore, the choice of input representation is critical when ranking different models based on these metrics. Indeed, bijective transfor- mations as simple as feature standardization or whitening can change the metric signiï¬
1703.04933#66
1703.04933#68
1703.04933
[ "1609.03193" ]
1703.04933#68
Sharp Minima Can Generalize For Deep Nets
cantly. On the contrary, ranking resulting from metrics like f- divergence and log-likelihood are not perturbed by bijective transformations because of the change of variables formula.
1703.04933#67
1703.04933
[ "1609.03193" ]
1703.03664#0
Parallel Multiscale Autoregressive Density Estimation
7 1 0 2 r a M 0 1 ] V C . s c [ 1 v 4 6 6 3 0 . 3 0 7 1 : v i X r a # Parallel Multiscale Autoregressive Density Estimation # Scott Reed 1 A¨aron van den Oord 1 Nal Kalchbrenner 1 Sergio G´omez Colmenarejo 1 Ziyu Wang 1 Dan Belov 1 Nando de Freitas 1 # Abstract
1703.03664#1
1703.03664
[ "1701.05517" ]
1703.03664#1
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pix- els. This can be sped up by caching activations, but still involves generating each pixel sequen- In this work, we propose a parallelized tially. PixelCNN that allows more eï¬ cient inference by modeling certain pixel groups as condition- ally independent. Our new PixelCNN model achieves competitive density estimation and or- ders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical genera- tion of 512 à 512 images. We evaluate the model on class-conditional image generation, text-to- image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive den- sity models that allow eï¬ cient sampling. â
1703.03664#0
1703.03664#2
1703.03664
[ "1701.05517" ]
1703.03664#2
Parallel Multiscale Autoregressive Density Estimation
A yellow bird with a black head, orange eyes and an orange bill. Figure 1. Samples from our model at resolutions from 4 à 4 to 256 à 256, conditioned on text and bird part locations in the CUB data set. See Fig. 4 and the supplement for more examples. # 1. Introduction case for WaveNet (Oord et al., 2016; Ramachandran et al., 2017). However, even with this optimization, generation is still in serial order by pixel. Many autoregressive image models factorize the joint dis- tribution of images into per-pixel factors: T pear) =| | posi) (1) t=1 Ideally we would generate multiple pixels in parallel, In the autore- which could greatly accelerate sampling. gressive framework this only works if the pixels are mod- eled as independent. Thus we need a way to judiciously break weak dependencies among pixels; for example im- mediately neighboring pixels should not be modeled as in- dependent since they tend to be highly correlated. For example PixelCNN (van den Oord et al., 2016b) uses a deep convolutional network with carefully designed ï¬ l- ter masking to preserve causal structure, so that all factors in equation 1 can be learned in parallel for a given image. However, a remaining diï¬ culty is that due to the learned causal structure, inference proceeds sequentially pixel-by- pixel in raster order. Multiscale image generation provides one such way to break weak dependencies. In particular, we can model cer- tain groups of pixels as conditionally independent given a lower resolution image and various types of context infor- mation, such as preceding frames in a video. The basic idea is obvious, but nontrivial design problems stand between the idea and a workable implementation. In the naive case, this requires a full network evaluation per pixel. Caching hidden unit activations can be used to reduce the amount of computation per pixel, as in the 1D 1DeepMind. [email protected]>. Correspondence to: Scott Reed <reed-
1703.03664#1
1703.03664#3
1703.03664
[ "1701.05517" ]
1703.03664#3
Parallel Multiscale Autoregressive Density Estimation
First, what is the right way to transmit global information from a low-resolution image to each generated pixel of the high-resolution image? Second, which pixels can we gen- erate in parallel? And given that choice, how can we avoid border artifacts when merging sets of pixels that were gen- erated in parallel, blind to one another? Parallel Multiscale Autoregressive Density Estimation In this work we show how a very substantial portion of the spatial dependencies in PixelCNN can be cut, with only modest degradation in performance. Our formulation al- lows sampling in O(log N) time for N pixels, instead of O(N) as in the original PixelCNN, resulting in orders of In the case of video, in magnitude speedup in practice. which we have access to high-resolution previous frames, we can even sample in O(1) time, with much better perfor- mance than comparably-fast baselines. conditional image generation schemes such as text and spa- tial structure to image (Mansimov et al., 2015; Reed et al., 2016b;a; Wang & Gupta, 2016). The addition of multiscale structure has also been shown Denton et al. to be useful in adversarial networks. (2015) used a Laplacian pyramid to generate images in a coarse-to-ï¬
1703.03664#2
1703.03664#4
1703.03664
[ "1701.05517" ]
1703.03664#4
Parallel Multiscale Autoregressive Density Estimation
ne manner. Zhang et al. (2016) composed a low-resolution and high-resolution text-conditional GAN, yielding higher quality 256 à 256 bird and ï¬ ower images. At a high level, the proposed approach can be viewed as a way to merge per-pixel factors in equation 1. If we merge the factors for, e.g. xi and x j, then that dependency is â cutâ , so the model becomes slightly less expressive. However, we get the beneï¬ t of now being able to sample xi and x j in parallel. If we divide the N pixels into G groups of T pixels each, the joint distribution can be written as a product of the corresponding G factors: Generator networks can be combined with a trained model, such as an image classiï¬ er or captioning network, to gen- erate high-resolution images via optimization and sam- pling procedures (Nguyen et al., 2016). Wu et al. (2017) state that it is diï¬ cult to quantify GAN performance, and propose Monte Carlo methods to approximate the log- likelihood of GANs on MNIST images. G pet) =| [pesky )) CO) gel Above we assumed that each of the G groups contains ex- actly T pixels, but in practice the number can vary. In this work, we form pixel groups from successively higher- resolution views of an image, arranged into a sub-sampling pyramid, such that G â O(log N). Both auto-regressive and non auto-regressive deep net- works have recently been applied successfully to image super-resolution. Shi et al. (2016) developed a sub-pixel convolutional network well-suited to this problem. Dahl et al. (2017) use a PixelCNN as a prior for image super- resolution with a convolutional neural network. Johnson et al. (2016) developed a perceptual loss function useful for both style transfer and super-resolution. GAN variants have also been successful in this domain (Ledig et al., 2016; Sønderby et al., 2017). In section 3 we describe this group structure implemented as a deep convolutional network. In section 4 we show that the model excels in density estimation and can produce quality high-resolution samples at high speed. # 2. Related work
1703.03664#3
1703.03664#5
1703.03664
[ "1701.05517" ]
1703.03664#5
Parallel Multiscale Autoregressive Density Estimation
Deep neural autoregressive models have been applied to image generation for many years, showing promise as a tractable yet expressive density model (Larochelle & Mur- ray, 2011; Uria et al., 2013). Autoregressive LSTMs have been shown to produce state-of-the-art performance in density estimation on large-scale datasets such as Ima- geNet (Theis & Bethge, 2015; van den Oord et al., 2016a). Causally-structured convolutional networks such as Pixel- CNN (van den Oord et al., 2016b) and WaveNet (Oord et al., 2016) improved the speed and scalability of train- ing. These led to improved autoregressive models for video generation (Kalchbrenner et al., 2016b) and machine trans- lation (Kalchbrenner et al., 2016a). Several other deep, tractable density models have recently been developed. Real NVP (Dinh et al., 2016) learns a mapping from images to a simple noise distribution, which is by construction trivially invertible. It is built from smaller invertible blocks called coupling layers whose Jacobian is lower-triangular, and also has a multiscale structure. Inverse Autoregressive Flows (Kingma & Sal- imans, 2016) use autoregressive structures in the latent space to learn more ï¬ exible posteriors for variational auto- encoders. Autoregressive models have also been combined with VAEs as decoder models (Gulrajani et al., 2016). The original PixelRNN paper (van den Oord et al., 2016a) actually included a multiscale autoregressive version, in which PixelRNNs or PixelCNNs were trained at multiple resolutions. The network producing a given resolution im- age was conditioned on the image at the next lower reso- lution. This work is similarly motivated by the usefulness of multiscale image structure (and the very long history of coarse-to-ï¬ ne modeling). Non-autoregressive convolutional generator networks have been successful and widely adopted for image generation as well. Instead of maximizing likelihood, Generative Ad- versarial Networks (GANs) train a generator network to fool a discriminator network adversary (Goodfellow et al., 2014).
1703.03664#4
1703.03664#6
1703.03664
[ "1701.05517" ]
1703.03664#6
Parallel Multiscale Autoregressive Density Estimation
These networks have been used in a wide variety of Our novel contributions in this work are (1) asymptotically and empirically faster inference by modeling conditional independence structure, (2) scaling to much higher reso- lution, (3) evaluating the model on a diverse set of chal- lenging benchmarks including class-, text- and structure- conditional image generation and video generation. Parallel Multiscale Autoregressive Density Estimation 1 1 1--2--17-2 1),2/)1)2 1) 2)1) 2 v ti +7 Y 3 3 g3+44+344 > oR EH 1 1 1+ 2+-1+2 1),2)1)2 1) 2) 1) 2 tS ra 3 3 3+4+3t4 Figure 2. Example pixel grouping and ordering for a 4 à 4 image. The upper-left corners form group 1, the upper-right group 2, and so on. For clarity we only use arrows to indicate immediately-neighboring dependencies, but note that all pixels in preceding groups can be used to predict all pixels in a given group. For example all pixels in group 2 can be used to predict pixels in group 4. In our image experiments pixels in group 1 originate from a lower-resolution image. For video, they are generated given the previous frames. TUT TTT TTT TTT TT] ResNet Split _ Bot ResNet, Split Shallow PixelCNN Merge TTT TTT | Split Merge Figure 3. A simple form of causal upscaling network, mapping from a K à K image to K à 2K. The same procedure can be applied in the vertical direction to produce a 2K à 2K image. In reference to ï¬ gure 2, the leftmost images could be considered â group 1â pixels; i.e. the upper-left corners. The network shown here produces â group 2â
1703.03664#5
1703.03664#7
1703.03664
[ "1701.05517" ]
1703.03664#7
Parallel Multiscale Autoregressive Density Estimation
pixels; i.e. the upper-right corners, completing the top-corners half of the image. (A) In the simplest version, a deep convolutional network (in our case ResNet) directly produces the right image from the left image, and merges column-wise. (B) A more sophisticated version extracts features from a convolutional net, splits the feature map into spatially contiguous blocks, and feeds these in parallel through a shallow PixelCNN. The result is then merged as in (A). # 3. Model The main design principle that we follow in building the model is a coarse-to-ï¬ ne ordering of pixels. Successively higher-resolution frames are generated conditioned on the previous resolution (See for example Figure 1). Pixels are grouped so as to exploit spatial locality at each resolution, which we describe in detail below. Concretely, to create groups we tile the image with 2 à 2 blocks. The corners of these 2à 2 blocks form the four pixel groups at a given scale; i.e. upper-left, upper-right, lower- left, lower-right. Note that some pairs of pixels both within each block and also across blocks can still be dependent. These additional dependencies are important for capturing local textures and avoiding border artifacts.
1703.03664#6
1703.03664#8
1703.03664
[ "1701.05517" ]
1703.03664#8
Parallel Multiscale Autoregressive Density Estimation
The training objective is to maximize log P(x; θ). Since the joint distribution factorizes over pixel groups and scales, the training can be trivially parallelized. # 3.1. Network architecture Figure 2 shows how we divide an image into disjoint groups of pixels, with autoregressive structure among the groups. The key property to notice is that no two adjacent pixels of the high-resolution image are in the same group. Also, pixels can depend on other pixels below and to the right, which would have been inaccessible in the standard PixelCNN. Each group of pixels corresponds to a factor in the joint distribution of equation 2. Figure 3 shows an instantiation of one of these factors as a neural network. Similar to the case of PixelCNN, at train- ing time losses and gradients for all of the pixels within a group can be computed in parallel. At test time, infer- ence proceeds sequentially over pixel groups, in parallel within each group. Also as in PixelCNN, we model the color channel dependencies - i.e. green sees red, blue sees red and green - using channel masking. In the case of type-A upscaling networks (See Figure 3A), sampling each pixel group thus requires 3 network evalua- tions 1. In the case of type-B upscaling, the spatial feature 1However, one could also use a discretized mixture of logistics as output instead of a softmax as in Salimans et al. (2017), in which case only one network evaluation is needed. Parallel Multiscale Autoregressive Density Estimation map for predicting a group of pixels is divided into contigu- ous M à M patches for input to a shallow PixelCNN (See ï¬ gure 3B). This entails M2 very small network evaluations, for each color channel. We used M = 4, and the shallow PixelCNN weights are shared across patches. The division into non-overlapping patches may appear to risk border artifacts when merging.
1703.03664#7
1703.03664#9
1703.03664
[ "1701.05517" ]
1703.03664#9
Parallel Multiscale Autoregressive Density Estimation
However, this does not occur for several reasons. First, each predicted pixel is di- rectly adjacent to several context pixels fed into the upscal- ing network. Second, the generated patches are not directly adjacent in the 2K Ã 2K output image; there is always a row or column of pixels on the border of any pair. Note that the only learnable portions of the upscaling mod- ule are (1) the ResNet encoder of context pixels, and (2) the shallow PixelCNN weights in the case of type-B upscaling.
1703.03664#8
1703.03664#10
1703.03664
[ "1701.05517" ]
1703.03664#10
Parallel Multiscale Autoregressive Density Estimation
The â mergeâ and â splitâ operations shown in ï¬ gure 3 only marshal data and are not associated with parameters. Given the ï¬ rst group of pixels, the rest of the groups at a given scale can be generated autoregressively. The ï¬ rst group of pixels can be modeled using the same approach as detailed above, recursively, down to a base resolution at which we use a standard PixelCNN. At each scale, the number of evaluations is O(1), and the resolution doubles after each upscaling, so the overall complexity is O(log N) to produce images with N pixels.
1703.03664#9
1703.03664#11
1703.03664
[ "1701.05517" ]
1703.03664#11
Parallel Multiscale Autoregressive Density Estimation
# 3.2. Conditional image modeling across 200 bird species, with 10 captions per image. As conditioning information we used a 32 à 32 spatial encoding of the 15 annotated bird part locations. â ¢ MPII (Andriluka et al., 2014) has around 25K images of 410 human activities, with 3 captions per image. We kept only the images depicting a single person, and cropped the image centered around the person, leaving us about 14K images. We used a 32 à 32 en- coding of the 17 annotated human part locations. â ¢ MS-COCO (Lin et al., 2014) has 80K training images with 5 captions per image. As conditioning we used the 80-class segmentation scaled to 32 à 32. â ¢ Robot Pushing (Finn et al., 2016) contains sequences of 20 frames of size 64 à 64 showing a robotic arm pushing objects in a basket. There are 50, 000 training sequences and a validation set with the same objects but diï¬ erent arm trajectories. One test set involves a subset of the objects seen during training and another involving novel objects, both captured on an arm and camera viewpoint not seen during training. All models for ImageNet, CUB, MPII and MS-COCO were trained using RMSprop with hyperparameter â
1703.03664#10
1703.03664#12
1703.03664
[ "1701.05517" ]
1703.03664#12
Parallel Multiscale Autoregressive Density Estimation
¬ = le â 8, with batch size 128 for 200K steps. The learning rate was set initially to le â 4 and decayed to le â 5. Given some context information c, such as a text descrip- tion, a segmentation, or previous video frames, we maxi- mize the conditional likelihood log P(x|c; θ). Each factor in equation 2 simply adds c as an additional conditioning variable. The upscaling neural network corresponding to each factor takes c as an additional input. For encoding text we used a character-CNN-GRU as in (Reed et al., 2016a). For spatially structured data such as segmentation masks we used a standard convolutional net- work. For encoding previous frames in a video we used a ConvLSTM as in (Kalchbrenner et al., 2016b). For all of the samples we show, the queries are drawn from the validation split of the corresponding data set. That is, the captions, key points, segmentation masks, and low- resolution images for super-resolution have not been seen by the model during training. When we evaluate negative log-likelihood, we only quan- tize pixel values to [0, ..., 255] at the target resolution, not separately at each scale. The lower resolution images are then created by sub-sampling this quantized image. # 4.2. Text and location-conditional generation # 4. Experiments # 4.1. Datasets We evaluate our model on ImageNet, Caltech-UCSD Birds (CUB), the MPII Human Pose dataset (MPII), the Mi- crosoft Common Objects in Context dataset (MS-COCO), and the Google Robot Pushing dataset.
1703.03664#11
1703.03664#13
1703.03664
[ "1701.05517" ]
1703.03664#13
Parallel Multiscale Autoregressive Density Estimation
â ¢ For ImageNet (Deng et al., 2009), we trained a class- conditional model using the 1000 leaf node classes. â ¢ CUB (Wah et al., 2011) contains 11, 788 images In this section we show results for CUB, MPII and MS- COCO. For each dataset we trained type-B upscaling net- works with 12 ResNet layers and 4 PixelCNN layers, with 128 hidden units per layer. The base resolution at which we train a standard PixelCNN was set to 4 Ã 4. To encode the captions we padded to 201 characters, then fed into a character-level CNN with three convolutional layers, followed by a GRU and average pooling over time. Upscaling networks to 8 Ã 8, 16 Ã 16 and 32 Ã 32 shared a single text encoder. For higher-resolution upscaling net- works we trained separate text encoders. In principle all upscalers could share an encoder, but we trained separably to save memory and time. Parallel Multiscale Autoregressive Density Estimation
1703.03664#12
1703.03664#14
1703.03664
[ "1701.05517" ]
1703.03664#14
Parallel Multiscale Autoregressive Density Estimation
Captions Keypoints Samples tail beak tail beak tail beak vail beak This is a large brown bird with a bright green head, yellow bill and orange feet. With long brown upper converts and giant white wings, the grey breasted bird flies through the air. Agrey bird witha small head and short beak with lighter grey wing bars and a bright rane . yellow belly. A white large bird with orange legs and gray secondaries and primaries, and a short yellow bill.
1703.03664#13
1703.03664#15
1703.03664
[ "1701.05517" ]
1703.03664#15
Parallel Multiscale Autoregressive Density Estimation
Figure 4. Text-to-image bird synthesis. The leftmost column shows the entire sampling process starting by generating 4 à 4 images, followed by six upscaling steps, to produce a 256 à 256 image. The right column shows the ï¬ nal sampled images for several other queries. For each query the associated part keypoints and caption are shown to the left of the samples. Captions A fisherman sitting along the edge of a creek preparing his equipment to cast. Two teams of players are competing in a game at a gym. Aman in blue pants and a blue t-shirt, wearing brown sneakers, is working on a roof. Awoman in black work out clothes is kneeling on an exercise mat. Keypoints nme Samples head head pelvis head pelvis head pelvis head pelvis Figure 5. Text-to-image human synthesis.The leftmost column again shows the sampling process, and the right column shows the ï¬ nal frame for several more examples. We ï¬ nd that the samples are diverse and usually match the color and position constraints. For CUB and MPII, we have body part keypoints for birds and humans, respectively. We encode these into a 32 à 32 à P binary feature map, where P is the number of parts; 17 for MPII and 15 for CUB. A 1 indicates the part is visible, and 0 indicates the part is not visible. For MS-COCO, we resize the class segmentation mask to 32 à 32 à 80. the target resolution for an upscaler network is higher than 32 à 32, these conditioning features are randomly cropped along with the target image to a 32 à 32 patch. Because the network is fully convolutional, the network can still gen- erate the full resolution at test time, but we can massively save on memory and computation during training. For all datasets, we then encode these spatial features us- ing a 12-layer ResNet. These features are then depth- concatenated with the text encoding and resized with bi- If linear interpolation to the spatial size of the image. Figure 4 shows examples of text- and keypoint-to-bird image synthesis. Figure 5 shows examples of text- and keypoint-to-human image synthesis. Figure 6 shows ex- amples of text- and segmentation-to-image synthesis. Parallel Multiscale Autoregressive Density Estimation
1703.03664#14
1703.03664#16
1703.03664
[ "1701.05517" ]
1703.03664#16
Parallel Multiscale Autoregressive Density Estimation
me me A young man riding on the back of a brown horse. ov Old time railroad caboose sitting on track with two people inside. Sn â fi L â d La 1 A large passenger jet taxis on an airport tarmac. uy rap A professional baseball player is ready to hit the ball. Aman sitting at a desk covered with papers. Figure 6. Text and segmentation-to-image synthesis. The left column shows the full sampling trajectory from 4 Ã 4 to 256 Ã 256. The caption queries are shown beneath the samples. Beneath each image we show the image masked with the largest object in each scene; i.e. only the foreground pixels in the sample are shown. More samples with all categories masked are included in the supplement. CUB PixelCNN Multiscale PixelCNN MPII PixelCNN Multiscale PixelCNN MS-COCO PixelCNN Multiscale PixelCNN Train Val 2.93 2.91 2.99 2.98 Train Val 2.92 2.90 2.91 3.03 Train Val 3.08 3.07 3.16 3.14 Test 2.92 2.98 Test 2.92 3.03 Test - - The motivation for training the O(T ) model is that previous frames in a video provide very detailed cues for predicting the next frame, so that our pixel groups could be condition- ally independent even without access to a low-resolution image. Without the need to upscale from a low-resolution image, we can produce â
1703.03664#15
1703.03664#17
1703.03664
[ "1701.05517" ]
1703.03664#17
Parallel Multiscale Autoregressive Density Estimation
group 1â pixels - i.e. the upper-left corner group - directly by conditioning on previous frames. Then a constant number of network evaluations are needed to sample the next three pixel groups at the ï¬ nal scale. Table 1. Text and structure-to image negative conditional log- likelihood in nats per sub-pixel. Quantitatively, the Multiscale PixelCNN results are not far from those obtained using the original PixelCNN (Reed In addition, we in- et al., 2016c), as shown in Table 1. creased the sample resolution by 8Ã
1703.03664#16
1703.03664#18
1703.03664
[ "1701.05517" ]
1703.03664#18
Parallel Multiscale Autoregressive Density Estimation
. Qualitatively, the sample quality appears to be on par, but with much greater realism due to the higher resolution. # 4.3. Action-conditional video generation The second version is our multi-step upscaler used in previ- ous experiments, conditioned on both previous frames and robot arm state and actions. The complexity of sampling from this model is O(T log N), because at every time step the upscaling procedure must be run, taking O(log N) time. The models were trained for 200K steps with batch size 64, using the RMSprop optimizer with centering and â
1703.03664#17
1703.03664#19
1703.03664
[ "1701.05517" ]
1703.03664#19
Parallel Multiscale Autoregressive Density Estimation
¬ = le-8. The learning rate was initialized to le â 4 and decayed by factor 0.3 after 83K steps and after 113K steps. For the O(T) model we used a mixture of discretized logistic out- puts (Salimans et al., 2017) and for the O(T log N) mode we used a softmax ouptut. In this section we present results on Robot Pushing videos. All models were trained to perform future frame prediction conditioned on 2 starting frames and also on the robot arm actions and state, which are each 5-dimensional vectors. We trained two versions of the model, both versions using type-A upscaling networks (See Fig. 3).
1703.03664#18
1703.03664#20
1703.03664
[ "1701.05517" ]
1703.03664#20
Parallel Multiscale Autoregressive Density Estimation
The ï¬ rst is de- signed to sample in O(T ) time, for T video frames. That is, the number of network evaluations per frame is constant with respect to the number of pixels. Table 2 compares two variants of our model with the origi- nal VPN. Compared to the O(T ) baseline - a convolutional LSTM model without spatial dependencies - our O(T ) model performs dramatically better. On the validation set, in which the model needs to generalize to novel combina- tions of objects and arm trajectories, the O(T log N) model does much better than our O(T ) model, although not as well as the original O(T N) model. Parallel Multiscale Autoregressive Density Estimation
1703.03664#19
1703.03664#21
1703.03664
[ "1701.05517" ]
1703.03664#21
Parallel Multiscale Autoregressive Density Estimation
8x8 â 128x128 8x8 > 512x512 | | 16x16 â 128x128 32x32 â 128x128 Figure 7. Upscaling low-resolution images to 128 à 128 and 512 à 512. In each group of images, the left column is made of real images, and the right columns of samples from the model. = = Monastery Cardoon Figure 8. Class-conditional 128 à 128 samples from a model trained on ImageNet. On the testing sets, we observed that the O(T ) model per- formed as well as on the validation set, but the O(T log N) model showed a drop in performance. However, this drop does not occur due to the presence of novel objects (in fact this setting actually yields better results), but due to the novel arm and camera conï¬ guration used during testing 2. It appears that the O(T log N) model may have overï¬ t to the background details and camera position of the 10 train- ing arms, but not necessarily to the actual arm and object motions. It should be possible to overcome this eï¬ ect with better regularization and perhaps data augmentation such as mirroring and jittering frames, or simply training on data with more diverse camera positions. 2From communication with the Robot Pushing dataset author. The supplement contains example videos generated on the validation set arm trajectories from our O(T log N) model. We also trained 64 â 128 and 128 â 256 upscalers con- ditioned on low-resolution and a previous high-resolution frame, so that we can produce 256 à 256 videos. # 4.4. Class-conditional generation
1703.03664#20
1703.03664#22
1703.03664
[ "1701.05517" ]
1703.03664#22
Parallel Multiscale Autoregressive Density Estimation
To compare against other image density models, we trained our Multiscale PixelCNN on ImageNet. We used type-B upscaling networks (Seee ï¬ gure 3) with 12 ResNet (He et al., 2016) layers and 4 PixelCNN layers, with 256 hidden units per layer. For all PixelCNNs in the model, we used the same architecture as in (van den Oord et al., 2016b). We generated images with a base resolution of 8 à 8 and
1703.03664#21
1703.03664#23
1703.03664
[ "1701.05517" ]
1703.03664#23
Parallel Multiscale Autoregressive Density Estimation
Parallel Multiscale Autoregressive Density Estimation Tr Model - O(T) baseline - O(TN) VPN O(T) VPN 1.03 O(T log N) VPN 0.74 Val 2.06 0.62 1.04 0.74 Ts-seen Ts-novel 2.08 0.64 1.04 1.06 2.07 0.64 1.04 0.97 Table 2. Robot videos neg. log-likelihood in nats per sub-pixel. â
1703.03664#22
1703.03664#24
1703.03664
[ "1701.05517" ]
1703.03664#24
Parallel Multiscale Autoregressive Density Estimation
Trâ is the training set, â Ts-seenâ is the test set with novel arm and camera conï¬ guration and previously seen objects, and â Ts- novelâ is the same as â Ts-seenâ but with novel objects. Model O(N) PixelCNN O(log N) PixelCNN O(log N) PixelCNN, in-graph O(T N) VPN O(T ) VPN O(T ) VPN, in-graph O(T log N) VPN O(T log N) VPN, in-graph scale 32 32 32 64 64 64 64 64 time 120.0 1.17 1.14 1929.8 0.38 0.37 3.82 3.07 speedup 1.0Ã
1703.03664#23
1703.03664#25
1703.03664
[ "1701.05517" ]
1703.03664#25
Parallel Multiscale Autoregressive Density Estimation
102à 105à 1.0à 5078à 5215à 505à 628à trained four upscaling networks to produce up to 128 à 128 samples.At scales 64 à 64 and above, during training we randomly cropped the image to 32 à 32. This accelerates training but does not pose a problem at test time because all of the networks are fully convolutional. Table 4. Sampling speed of several models in seconds per frame on an Nvidia Quadro M4000 GPU. The top three rows were mea- sured on 32à 32 ImageNet, with batch size of 30. The bottom ï¬ ve rows were measured on generating 64 à 64 videos of 18 frames each, averaged over 5 videos. Table 3 shows the results. On both 32 à 32 and 64 à 64 ImageNet it achieves signiï¬ cantly better likelihood scores than have been reported for any non-pixel-autoregressive density models, such as ConvDRAW and Real NVP, that also allow eï¬ cient sampling. ing from 8 à 8, but less realistic results due to the more challenging nature of the problem. Upscaling starting from 32 à 32 results in much more realistic images. Here the diversity is apparent in the samples (as in the data, condi- tioned on low-resolution) in the local details such as the dogâ s fur patterns or the frogâ s eye contours.
1703.03664#24
1703.03664#26
1703.03664
[ "1701.05517" ]
1703.03664#26
Parallel Multiscale Autoregressive Density Estimation
Of course, performance of these approaches varies consid- erably depending on the implementation details, especially in the design and capacity of deep neural networks used. But it is notable that the very simple and direct approach developed here can surpass the state-of-the-art among fast- sampling density models. # 4.5. Sampling time comparison As expected, we observe a very large speedup of our model compared to sampling from a standard PixelCNN at the same resolution (see Table 4). Even at 32 à 32 we ob- serve two orders of magnitude speedup, and the speedup is greater for higher resolution. 32 Model 3.86 (3.83) PixelRNN 3.83 (3.77) PixelCNN Real NVP 4.28(4.26) Conv. DRAW 4.40(4.35) 3.95(3.92) Ours 64 3.64(3.57) 3.57(3.48) 3.98(3.75) 4.10(4.04) 3.70(3.67) 128 - - - - 3.55(3.42) Since our model only requires O(log N) network evalua- tions to sample, we can ï¬ t the entire computation graph for sampling into memory, for reasonable batch sizes. In- graph computation in TensorFlow can further improve the speed of both image and video generation, due to reduced overhead by avoiding repeated calls to sess.run.
1703.03664#25
1703.03664#27
1703.03664
[ "1701.05517" ]
1703.03664#27
Parallel Multiscale Autoregressive Density Estimation
Table 3. ImageNet negative log-likelihood in bits per sub-pixel at 32 à 32, 64 à 64 and 128 à 128 resolution. In Figure 8 we show examples of diverse 128 à 128 class conditional image generation. Since our model has a PixelCNN at the lowest resolution, it can also be accelerated by caching PixelCNN hidden unit activations, recently implemented b by Ramachandran et al. (2017). This could allow one to use higher-resolution base PixelCNNs without sacriï¬ cing speed. Interestingly, the model often produced quite realistic bird images from scratch when trained on CUB, and these sam- ples looked more realistic than any animal image generated by our ImageNet models. One plausible explanation for this diï¬ erence is a lack of model capacity; a single network modeling the 1000 very diverse ImageNet categories can devote only very limited capacity to each one, compared to a network that only needs to model birds. This sug- gests that ï¬ nding ways to increase capacity without slowing down training or sampling could be a promising direction. # 5. Conclusions
1703.03664#26
1703.03664#28
1703.03664
[ "1701.05517" ]
1703.03664#28
Parallel Multiscale Autoregressive Density Estimation
In this paper, we developed a parallelized, multiscale ver- sion of PixelCNN. It achieves competitive density estima- tion results on CUB, MPII, MS-COCO, ImageNet, and Robot Pushing videos, surpassing all other density models that admit fast sampling. Qualitatively, it can achieve com- pelling results in text-to-image synthesis and video gener- ation, as well as diverse super-resolution from very small images all the way to 512 Ã
1703.03664#27
1703.03664#29
1703.03664
[ "1701.05517" ]
1703.03664#29
Parallel Multiscale Autoregressive Density Estimation
512. Figure 7 shows upscaling starting from ground-truth im- ages of size 8Ã 8, 16Ã 16 and 32Ã 32. We observe the largest diversity of samples in terms of global structure when start- Many more samples from all of our models can be found in the appendix and supplementary material. Parallel Multiscale Autoregressive Density Estimation # References Andriluka, Mykhaylo, Pishchulin, Leonid, Gehler, Peter, and Schiele, Bernt. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, pp. 3686â 3693, 2014. Dahl, Ryan, Norouzi, Mohammad, and Shlens, Jonathon. arXiv preprint Pixel arXiv:1702.00783, 2017. recursive super resolution. Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, ImageNet: A large-scale hierarchical and Fei-Fei, Li. image database. In CVPR, 2009. Larochelle, Hugo and Murray, Iain. The neural autoregres- sive distribution estimator. In AISTATS, 2011.
1703.03664#28
1703.03664#30
1703.03664
[ "1701.05517" ]
1703.03664#30
Parallel Multiscale Autoregressive Density Estimation
Ledig, Christian, Theis, Lucas, Huszar, Ferenc, Caballero, Jose, Cunningham, Andrew, Acosta, Alejandro, Aitken, Andrew, Tejani, Alykhan, Totz, Johannes, Wang, Zehan, and Shi, Wenzhe. Photo-realistic single image super- resolution using a generative adversarial network. 2016. Lin, Tsung-Yi, Maire, Michael, Belongie, Serge, Hays, James, Perona, Pietro, Ramanan, Deva, Doll´ar, Piotr, and Zitnick, C Lawrence. Microsoft COCO: Common objects in context. In ECCV, pp. 740â 755, 2014. Denton, Emily L, Chintala, Soumith, Szlam, Arthur, and Fergus, Rob.
1703.03664#29
1703.03664#31
1703.03664
[ "1701.05517" ]
1703.03664#31
Parallel Multiscale Autoregressive Density Estimation
Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pp. 1486â 1494, 2015. Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using Real NVP. In NIPS, 2016. Mansimov, Elman, Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Generating images from cap- tions with attention. In ICLR, 2015. Nguyen, Anh, Yosinski, Jason, Bengio, Yoshua, Dosovit- skiy, Alexey, and Clune, Jeï¬ . Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
1703.03664#30
1703.03664#32
1703.03664
[ "1701.05517" ]
1703.03664#32
Parallel Multiscale Autoregressive Density Estimation
Finn, Chelsea, Goodfellow, Ian, and Levine, Sergey. Unsu- pervised learning for physical interaction through video prediction. In NIPS, 2016. Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. In NIPS, 2014. Gulrajani, Ishaan, Kumar, Kundan, Ahmed, Faruk, Taiga, Adrien Ali, Visin, Francesco, Vazquez, David, and Courville, Aaron. PixelVAE: A latent variable model for natural images. arXiv preprint arXiv:1611.05013, 2016. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. In ECCV, pp. 630â 645, 2016. Johnson, Justin, Alahi, Alexandre, and Fei-Fei, Li.
1703.03664#31
1703.03664#33
1703.03664
[ "1701.05517" ]
1703.03664#33
Parallel Multiscale Autoregressive Density Estimation
Per- ceptual losses for real-time style transfer and super- resolution. In ECCV, 2016. Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, Oord, Aaron van den, Graves, Alex, and Kavukcuoglu, Koray. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016a. Kalchbrenner, Nal, Oord, Aaron van den, Simonyan, Karen, Danihelka, Ivo, Vinyals, Oriol, Graves, Alex, and Kavukcuoglu, Koray. Video pixel networks. Preprint arXiv:1610.00527, 2016b. Kingma, Diederik P and Salimans, Tim. Improving vari- ational inference with inverse autoregressive ï¬
1703.03664#32
1703.03664#34
1703.03664
[ "1701.05517" ]
1703.03664#34
Parallel Multiscale Autoregressive Density Estimation
ow. In NIPS, 2016. Oord, Aaron van den, Dieleman, Sander, Zen, Heiga, Si- monyan, Karen, Vinyals, Oriol, Graves, Alex, Kalch- brenner, Nal, Senior, Andrew, and Kavukcuoglu, Ko- ray. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
1703.03664#33
1703.03664#35
1703.03664
[ "1701.05517" ]
1703.03664#35
Parallel Multiscale Autoregressive Density Estimation
Ramachandran, Prajit, Paine, Tom Le, Khorrami, Pooya, Babaeizadeh, Mohammad, Chang, Shiyu, Zhang, Yang, Hasegawa-Johnson, Mark, Campbell, Roy, and Huang, Thomas. Fast generation for convolutional autoregres- sive models. 2017. Reed, Scott, Akata, Zeynep, Mohan, Santosh, Tenka, Samuel, Schiele, Bernt, and Lee, Honglak.
1703.03664#34
1703.03664#36
1703.03664
[ "1701.05517" ]
1703.03664#36
Parallel Multiscale Autoregressive Density Estimation
Learning what and where to draw. In NIPS, 2016a. Reed, Scott, Akata, Zeynep, Yan, Xinchen, Logeswaran, Lajanugen, Schiele, Bernt, and Lee, Honglak. Gen- In ICML, erative adversarial text-to-image synthesis. 2016b. Reed, Scott, van den Oord, A¨aron, Kalchbrenner, Nal, Bapst, Victor, Botvinick, Matt, and de Freitas, Nando. Generating interpretable images with controllable struc- ture. Technical report, 2016c. Salimans, Tim, Karpathy, Andrej, Chen, Xi, and Kingma, Diederik P. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modiï¬ cations. arXiv preprint arXiv:1701.05517, 2017.
1703.03664#35
1703.03664#37
1703.03664
[ "1701.05517" ]
1703.03664#37
Parallel Multiscale Autoregressive Density Estimation
Shi, Wenzhe, Caballero, Jose, Husz´ar, Ferenc, Totz, Jo- hannes, Aitken, Andrew P, Bishop, Rob, Rueckert, Daniel, and Wang, Zehan. Real-time single image and video super-resolution using an eï¬ cient sub-pixel con- volutional neural network. In CVPR, 2016. Parallel Multiscale Autoregressive Density Estimation Sønderby, Casper Kaae, Caballero, Jose, Theis, Lucas, Shi, Wenzhe, and Husz´ar, Ferenc. Amortised MAP inference for image super-resolution. 2017. # 6. Appendix Below we show additional samples.
1703.03664#36
1703.03664#38
1703.03664
[ "1701.05517" ]
1703.03664#38
Parallel Multiscale Autoregressive Density Estimation
Theis, L. and Bethge, M. Generative image modeling using spatial LSTMs. In NIPS, 2015. Iain, and Larochelle, Hugo. RNADE: The real-valued neural autoregressive density- estimator. In NIPS, 2013. and Kavukcuoglu, Koray. Pixel recurrent neural networks. In ICML, pp. 1747â 1756, 2016a. van den Oord, A¨aron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray.
1703.03664#37
1703.03664#39
1703.03664
[ "1701.05517" ]
1703.03664#39
Parallel Multiscale Autoregressive Density Estimation
Conditional image generation with PixelCNN decoders. In NIPS, 2016b. Wah, Catherine, Branson, Steve, Welinder, Peter, Perona, Pietro, and Belongie, Serge. The Caltech-UCSD birds- 200-2011 dataset. 2011. Wang, Xiaolong and Gupta, Abhinav. Generative image modeling using style and structure adversarial networks. In ECCV, pp. 318â 335, 2016. Wu, Yuhuai, Burda, Yuri, Salakhutdinov, Ruslan, and Grosse, Roger. On the quantitative analysis of decoder- based generative models. 2017. Zhang, Han, Xu, Tao, Li, Hongsheng, Zhang, Shaoting, Huang, Xiaolei, Wang, Xiaogang, and Metaxas, Dim- itris. StackGAN: Text to photo-realistic image synthe- sis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242, 2016. Parallel Multiscale Autoregressive Density Estimation
1703.03664#38
1703.03664#40
1703.03664
[ "1701.05517" ]
1703.03664#40
Parallel Multiscale Autoregressive Density Estimation
oe ae ee Be ite striped breast, 5 â < A bird with a short neck, yellow eyebrows and brown and whi neck and primaries. sail oak fam ak soi beak ail A yellow bird with a black head, orange eyes and an orange bill. ak. sail Beak tail 503 This little bird has a thin long curved down beak white under-body and brow head wings back and tail. Awhite large bird with orange legs and gray secondaries and primaries, and a short yellow bill. beak rail eak au 3k sail The bird has a small bill that is black and a white breast. pu peak gout beak oi beak rau beak i Ld a White bellied bird has black and orange breast, black head and straight black tail. tai beak sail beak tit beak toi beak The bird is round with a green crown and white belly. ake ai beak you eae â Small light brown bird with black rectricles and a long white beak. ti beak rail eae soi beak tit beak The small brown bird has an ivory belly with dark brown stripes on its crown. This bird has a white belly and breast with a black back and red crown and nape. beak beak _ beak An aquatic bird with a long, two toned neck with red eyes. rai weak sail beak fm 3k oi beak i This is a large brown bird with a bright green head, yellow bill and orange feet. pau beak sail beak ait beak toi beak 4 This magnificent specimen has a white belly, pink breast and neck, with black superciliary and white winabars. no Es A bird with a red bill that has a pointed black tip, white wing bars, a small head, white throat and belly. beak it beak st 3 a =| = This bird has a white back , breast and belly with a black crown and long The bird has curved feet that are black and a small bill. oak beak soi With long brown upper converts and giant white wings, the grey breasted bird flies through the air.
1703.03664#39
1703.03664#41
1703.03664
[ "1701.05517" ]
1703.03664#41
Parallel Multiscale Autoregressive Density Estimation
Figure 9. Additional CUB samples randomly chosen from the validation set. Parallel Multiscale Autoregressive Density Estimation A blurry photo of a woman swimming underwater in a pool pelvis head gelvis head elvis ead Aman ina black shirt and blue jeans is washing a black car. Aman wearing camo is fixing a large gun on the table. head elvis head ee a head ead head An elderly man in a black striped shirt holding a yellow handle. eag has ead ead |_| Aman in a white shirt with a black vest is driving a boat. ie elvis head pelvis head vis A middle-aged man is wearing a bicycling outfit and a red helmet and has the number 96 on his handlebars. Aman in a white and green shirt is standing next to a tiller. head head ead Aman ina tight fitting red outfit is doing a gymnastics move. a man ina blue shirt and pants is doing a pull up on metal bar at wooden poles. head elvis head vis head This man i holding a large package and wheeling it down a hallway. Aman in a red shirt and blue overalls who is chopping wood with a large ax.
1703.03664#40
1703.03664#42
1703.03664
[ "1701.05517" ]
1703.03664#42
Parallel Multiscale Autoregressive Density Estimation
Figure 10. Additional MPII samples randomly chosen from the validation set. Parallel Multiscale Autoregressive Density Estimation person 05 ed | Three people on the beach with one holding a surfboard rT - ry | = Sopanaeeret it â See ee e â Two horses are in the grass by the woods. arte, = A set of four buses parked next to each other ona parking lot. A bus is being towed by a blue tow truck A building with a clock mounted on it. A train is standing at a railway station and a car is A giraffe walking through a grassy area near some parked in front of it. â woman holding a baby while sitting in front of a cake layer bunting a baseball at a game. airplane oka Alarge white airplane parked in a stationary position. A bunch of trucks parked next to each other. Three horses and a foal in an enclosed field. person airplane f a \ eel j é a nif I nif Mi ed Sat â Some white sheep are in a brown pen. â A young skier is looking away while people in the large commercial airplane taking off from the landing background look on. stripe. A black roman numeral clock on a building.
1703.03664#41
1703.03664#43
1703.03664
[ "1701.05517" ]
1703.03664#43
Parallel Multiscale Autoregressive Density Estimation
Figure 11. Additional MS-COCO samples randomly chosen from the validation set. [Figure 12 image grid: frames from robot pushing videos at the three resolutions listed in the caption.] Figure 12. Robot pushing videos at 64 × 64, 128 × 128 and 256 × 256.
1703.03664#42
1703.03664#44
1703.03664
[ "1701.05517" ]
1703.03664#44
Parallel Multiscale Autoregressive Density Estimation
[Figure 13 image grid: label-conditional ImageNet samples, with class labels such as "Pomegranate".] Figure 13. Label-conditional 128 × 128 ImageNet samples. [Figure 14 image grid: upscaling results, 32×32 → 512×512 and 8×8 → 512×512.] Figure 14. Additional upscaling samples.
1703.03664#43
1703.03664
[ "1701.05517" ]
1703.03400#0
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
# Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks # Chelsea Finn 1 Pieter Abbeel 1 2 Sergey Levine 1 # Abstract We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune.
1703.03400#1
1703.03400
[ "1612.00796" ]
1703.03400#1
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies. # 1. Introduction Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from only a few examples, and continuing to adapt as more data becomes available. This kind of fast and flexible learning is challenging, since the agent must integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data. Furthermore, the form of prior experience and new data will depend on the task. As such, for the greatest applicability, the mechanism for learning to learn (or meta-learning) should be general to the task and the form of computation required to complete the task.
1703.03400#0
1703.03400#2
1703.03400
[ "1612.00796" ]
1703.03400#2
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
1University of California, Berkeley 2OpenAI. Correspondence to: Chelsea Finn <[email protected]>. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). In this work, we propose a meta-learning algorithm that is general and model-agnostic, in the sense that it can be directly applied to any learning problem and model that is trained with a gradient descent procedure. Our focus is on deep neural network models, but we illustrate how our approach can easily handle different architectures and different problem settings, including classification, regression, and policy gradient reinforcement learning, with minimal modification.
1703.03400#1
1703.03400#3
1703.03400
[ "1612.00796" ]
1703.03400#3
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data, and the model is trained by the meta-learner to be able to learn on a large number of different tasks. The key idea underlying our method is to train the model's initial parameters such that the model has maximal performance on a new task after the parameters have been updated through one or more gradient steps computed with a small amount of data from that new task. Unlike prior meta-learning methods that learn an update function or learning rule (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our algorithm does not expand the number of learned parameters nor place constraints on the model architecture (e.g. by requiring a recurrent model (Santoro et al., 2016) or a Siamese network (Koch, 2015)), and it can be readily combined with fully connected, convolutional, or recurrent neural networks. It can also be used with a variety of loss functions, including differentiable supervised losses and non-differentiable reinforcement learning objectives. The process of training a model's parameters such that a few gradient steps, or even a single gradient step, can produce good results on a new task can be viewed from a feature learning standpoint as building an internal representation that is broadly suitable for many tasks. If the internal representation is suitable to many tasks, simply fine-tuning the parameters slightly (e.g. by primarily modifying the top layer weights in a feedforward model) can produce good results. In effect, our procedure optimizes for models that are easy and fast to fine-tune, allowing the adaptation to happen in the right space for fast learning. From a dynamical systems standpoint, our learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters: when the sensitivity is high, small local changes to the parameters can lead to large improvements in the task loss. The primary contribution of this work is a simple model- and task-agnostic algorithm for meta-learning that trains a model's
1703.03400#2
1703.03400#4
1703.03400
[ "1612.00796" ]
1703.03400#4
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
parameters such that a small number of gradient updates will lead to fast learning on a new task. We demonstrate the algorithm on different model types, including fully connected and convolutional networks, and in several distinct domains, including few-shot regression, image classification, and reinforcement learning. Our evaluation shows that our meta-learning algorithm compares favorably to state-of-the-art one-shot learning methods designed specifically for supervised classification, while using fewer parameters, but that it can also be readily applied to regression and can accelerate reinforcement learning in the presence of task variability, substantially outperforming direct pretraining as initialization. # 2. Model-Agnostic Meta-Learning We aim to train models that can achieve rapid adaptation, a problem setting that is often formalized as few-shot learning.
1703.03400#3
1703.03400#5
1703.03400
[ "1612.00796" ]
1703.03400#5
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
In this section, we will define the problem setup and present the general form of our algorithm. # 2.1. Meta-Learning Problem Set-Up The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations. To accomplish this, the model or learner is trained during a meta-learning phase on a set of tasks, such that the trained model can quickly adapt to new tasks using only a small number of examples or trials. In effect, the meta-learning problem treats entire tasks as training examples. In this section, we formalize this meta-learning problem setting in a general manner, including brief examples of different learning domains. We will discuss two different learning domains in detail in Section 3. We consider a model, denoted f, that maps observations x to outputs a. During meta-learning, the model is trained to be able to adapt to a large or infinite
1703.03400#4
1703.03400#6
1703.03400
[ "1612.00796" ]
1703.03400#6
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
number of tasks. Since we would like to apply our framework to a variety of learning problems, from classification to reinforcement learning, we introduce a generic notion of a learning task below. Formally, each task T = {L(x_1, a_1, ..., x_H, a_H), q(x_1), q(x_{t+1}|x_t, a_t), H} consists of a loss function L, a distribution over initial observations q(x_1), a transition distribution q(x_{t+1}|x_t, a_t), and an episode length H. In i.i.d. supervised learning problems, the length H = 1. The model may generate samples of length H by choosing an output a_t at each time t. The loss L(x_1, a_1, ..., x_H, a_H) → R provides task-specific feedback, which might be in the form of a misclassification loss or a cost function in a Markov decision process. [Figure 1 diagram: parameters θ with task gradients ∇L_1, ∇L_2, ∇L_3 leading to adapted parameters θ*_1, θ*_2, θ*_3.] Figure 1.
1703.03400#5
1703.03400#7
1703.03400
[ "1612.00796" ]
1703.03400#7
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Diagram of our model-agnostic meta-learning algorithm (MAML), which optimizes for a representation θ that can quickly adapt to new tasks. In our meta-learning scenario, we consider a distribution over tasks p(T) that we want our model to be able to adapt to. In the K-shot learning setting, the model is trained to learn a new task T_i drawn from p(T) from only K samples drawn from q_i and feedback L_{T_i} generated by T_i. During meta-training, a task T_i is sampled from p(T), the model is trained with K samples and feedback from the corresponding loss L_{T_i} from T_i, and then tested on new samples from T_i. The model f is then improved by considering how the test error on new data from q_i changes with respect to the parameters. In effect, the test error on sampled tasks T_i serves as the training error of the meta-learning process. At the end of meta-training, new tasks are sampled from p(T), and meta-performance is measured by the model's performance after learning from K samples. Generally, tasks used for meta-testing are held out during meta-training. # 2.2. A Model-Agnostic Meta-Learning Algorithm In contrast to prior work, which has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016b) or feature embeddings that can be combined with nonparametric methods at test time (Vinyals et al., 2016; Koch, 2015), we propose a method that can learn the parameters of any standard model via meta-learning in such a way as to prepare that model for fast adaptation. The intuition behind this approach is that some internal representations are more transferrable than others. For example, a neural network might learn internal features that are broadly applicable to all tasks in p(T), rather than a single individual task. How can we encourage the emergence of such general-purpose representations? We take an explicit approach to this problem: since the model will be fine-tuned using a gradient-based learning rule on a new task, we will aim to learn a model in such a way that this gradient-based learning rule can make rapid progress on new tasks drawn from p(T), without overfitting.
1703.03400#6
1703.03400#8
1703.03400
[ "1612.00796" ]
1703.03400#8
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
In effect, we will aim to find model parameters that are sensitive to changes in the task, such that small changes in the parameters will produce large improvements on the loss function of any task drawn from p(T), when altered in the direction of the gradient of that loss (see Figure 1).

Algorithm 1 Model-Agnostic Meta-Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T)
4:   for all T_i do
5:     Evaluate ∇_θ L_{T_i}(f_θ) with respect to K examples
6:     Compute adapted parameters with gradient descent: θ'_i = θ − α ∇_θ L_{T_i}(f_θ)
7:   end for
8:   Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ'_i})
9: end while
1703.03400#7
1703.03400#9
1703.03400
[ "1612.00796" ]
1703.03400#9
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
products, which is supported by standard deep learning libraries such as TensorFlow (Abadi et al., 2016). In our experiments, we also include a comparison to dropping this backward pass and using a first-order approximation, which we discuss in Section 5.2.

We make no assumption on the form of the model, other than to assume that it is parametrized by some parameter vector θ, and that the loss function is smooth enough in θ that we can use gradient-based learning techniques.

Formally, we consider a model represented by a parametrized function f_θ with parameters θ. When adapting to a new task T_i, the model's parameters θ become θ'_i. In our method, the updated parameter vector θ'_i is computed using one or more gradient descent updates on task T_i. For example, when using one gradient update,

θ'_i = θ − α ∇_θ L_{T_i}(f_θ).

The step size α may be fixed as a hyperparameter or meta-learned. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension.

The model parameters are trained by optimizing for the performance of f_{θ'_i} with respect to θ across tasks sampled from p(T). More concretely, the meta-objective is as follows:

min_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ'_i}) = Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ − α ∇_θ L_{T_i}(f_θ)})

# 3. Species of MAML

In this section, we discuss specific instantiations of our meta-learning algorithm for supervised learning and reinforcement learning. The domains differ in the form of loss function and in how data is generated by the task and presented to the model, but the same basic adaptation mechanism can be applied in both cases.
1703.03400#8
1703.03400#10
1703.03400
[ "1612.00796" ]
1703.03400#10
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
For example, the goal might be to classify images of a Segway after seeing only one or a few examples of a Segway, with a model that has previously seen many other types of objects. Likewise, in few-shot regression, the goal is to predict the outputs of a continuous-valued function from only a few datapoints sampled from that function, after training on many func- tions with similar statistical properties. To formalize the supervised regression and classiï¬ cation problems in the context of the meta-learning deï¬ nitions in Section 2.1, we can deï¬ ne the horizon H = 1 and drop the timestep subscript on xt, since the model accepts a single input and produces a single output, rather than a sequence of inputs and outputs. The task Ti generates K i.i.d. ob- servations x from qi, and the task loss is represented by the error between the modelâ s output for x and the correspond- ing target values y for that observation and task. Note that the meta-optimization is performed over the model parameters 0, whereas the objective is computed us- ing the updated model parameters 6â
1703.03400#9
1703.03400#11
1703.03400
[ "1612.00796" ]
1703.03400#11
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.

The meta-optimization across tasks is performed via stochastic gradient descent (SGD), such that the model parameters θ are updated as follows:

θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ'_i})    (1)

where β is the meta step size. The full algorithm, in the general case, is outlined in Algorithm 1.

The MAML meta-gradient update involves a gradient through a gradient. Computationally, this requires an additional backward pass through f to compute Hessian-vector products.

Two common loss functions used for supervised classification and regression are cross-entropy and mean-squared error (MSE), which we will describe below; though, other supervised loss functions may be used as well. For regression tasks using mean-squared error, the loss takes the form:

L_{T_i}(f_φ) = Σ_{x(j), y(j) ∼ T_i} ‖f_φ(x(j)) − y(j)‖₂²    (2)

where x(j), y(j) are an input/output pair sampled from task T_i. In K-shot regression tasks, K input/output pairs are provided for learning for each task.

Similarly, for discrete classification tasks with a cross-entropy loss, the loss takes the form:

L_{T_i}(f_φ) = Σ_{x(j), y(j) ∼ T_i} y(j) log f_φ(x(j)) + (1 − y(j)) log(1 − f_φ(x(j)))    (3)
1703.03400#10
1703.03400#12
1703.03400
[ "1612.00796" ]
1703.03400#12
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Algorithm 2 MAML for Few-Shot Supervised Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T)
4:   for all T_i do
5:     Sample K datapoints D = {x(j), y(j)} from T_i
6:     Evaluate ∇_θ L_{T_i}(f_θ) using D and L_{T_i} in Equation (2) or (3)
7:     Compute adapted parameters with gradient descent: θ'_i = θ − α ∇_θ L_{T_i}(f_θ)
8:     Sample datapoints D'_i = {x(j), y(j)} from T_i for the meta-update
9:   end for
10:  Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ'_i}) using each D'_i and L_{T_i} in Equation 2 or 3
11: end while

Algorithm 3 MAML for Reinforcement Learning
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T)
4:   for all T_i do
5:     Sample K trajectories D = {(x_1, a_1, ..., x_H)} using f_θ in T_i
6:     Evaluate ∇_θ L_{T_i}(f_θ) using D and L_{T_i} in Equation 4
7:     Compute adapted parameters with gradient descent: θ'_i = θ − α ∇_θ L_{T_i}(f_θ)
8:     Sample trajectories D'_i = {(x_1, a_1, ..., x_H)} using f_{θ'_i} in T_i
9:   end for
10:  Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(f_{θ'_i}) using each D'_i and L_{T_i} in Equation 4
11: end while
1703.03400#11
1703.03400#13
1703.03400
[ "1612.00796" ]
1703.03400#13
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
According to the conventional terminology, K-shot classification tasks use K input/output pairs from each class, for a total of NK data points for N-way classification. Given a distribution over tasks p(T_i), these loss functions can be directly inserted into the equations in Section 2.2 to perform meta-learning, as detailed in Algorithm 2.

# 3.2. Reinforcement Learning

In reinforcement learning (RL), the goal of few-shot meta-learning is to enable an agent to quickly acquire a policy for a new test task using only a small amount of experience in the test setting. A new task might involve achieving a new goal or succeeding on a previously trained goal in a new environment. For example, an agent might learn to quickly figure out how to navigate mazes so that, when faced with a new maze, it can determine how to reliably reach the exit with only a few samples. In this section, we will discuss how MAML can be applied to meta-learning for RL.

Since the expected reward is generally not differentiable due to unknown dynamics, we use policy gradient methods to estimate the gradient both for the model gradient update(s) and the meta-optimization. Since policy gradients are an on-policy algorithm, each additional gradient step during the adaptation of f_θ requires new samples from the current policy f_{θ'_i}. We detail the algorithm in Algorithm 3. This algorithm has the same structure as Algorithm 2, with the principal difference being that steps 5 and 8 require sampling trajectories from the environment corresponding to task T_i. Practical implementations of this method may also use a variety of improvements recently proposed for policy gradient algorithms, including state or action-dependent baselines and trust regions (Schulman et al., 2015).
1703.03400#12
1703.03400#14
1703.03400
[ "1612.00796" ]
1703.03400#14
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Each RL task T_i contains an initial state distribution q_i(x_1) and a transition distribution q_i(x_{t+1}|x_t, a_t), and the loss L_{T_i} corresponds to the (negative) reward function R. The entire task is therefore a Markov decision process (MDP) with horizon H, where the learner is allowed to query a limited number of sample trajectories for few-shot learning. Any aspect of the MDP may change across tasks in p(T). The model being learned, f_θ, is a policy that maps from states x_t to a distribution over actions a_t at each timestep t ∈ {1, ..., H}. The loss for task T_i and model f_φ takes the form

L_{T_i}(f_φ) = −E_{x_t, a_t ∼ f_φ, q_{T_i}} [ Σ_{t=1}^{H} R_i(x_t, a_t) ]    (4)

In K-shot reinforcement learning, K rollouts from f_θ and task T_i, (x_1, a_1, ..., x_H), and the corresponding rewards R(x_t, a_t), may be used for adaptation on a new task T_i.

# 4. Related Work
1703.03400#13
1703.03400#15
1703.03400
[ "1612.00796" ]
1703.03400#15
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
The method that we propose in this paper addresses the general problem of meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992), which includes few-shot learning. A popular approach for meta-learning is to train a meta-learner that learns how to update the parameters of the learner's model (Bengio et al., 1992; Schmidhuber, 1992; Bengio et al., 1990). This approach has been applied to learning to optimize deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2017), as well as for learning dynamically changing recurrent networks (Ha et al., 2017). One recent approach learns both the weight initialization and the optimizer, for few-shot image recognition (Ravi & Larochelle, 2017). Unlike these methods, the MAML learner's weights are updated using the gradient, rather than a learned update; our method does not introduce additional parameters for meta-learning nor require a particular learner architecture. Few-shot learning methods have also been developed for
1703.03400#14
1703.03400#16
1703.03400
[ "1612.00796" ]
1703.03400#16
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
specific tasks such as generative modeling (Edwards & Storkey, 2017; Rezende et al., 2016) and image recognition (Vinyals et al., 2016). One successful approach for few-shot classification is to learn to compare new examples in a learned metric space using e.g. Siamese networks (Koch, 2015) or recurrence with attention mechanisms (Vinyals et al., 2016; Shyam et al., 2017; Snell et al., 2017). These approaches have generated some of the most successful results, but are difficult to directly extend to other problems, such as reinforcement learning. Our method, in contrast, is agnostic to the form of the model and to the particular learning task.

All of the meta-learning problems that we consider require some amount of adaptation to new tasks at test-time. When possible, we compare our results to an oracle that receives the identity of the task (which is a problem-dependent representation) as an additional input, as an upper bound on the performance of the model. All of the experiments were performed using TensorFlow (Abadi et al., 2016), which allows for automatic differentiation through the gradient update(s) during meta-learning. The code is available online1.
1703.03400#15
1703.03400#17
1703.03400
[ "1612.00796" ]
1703.03400#17
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
The code is available online1. Another approach to meta-learning is to train memory- augmented models on many tasks, where the recurrent learner is trained to adapt to new tasks as it is rolled out. Such networks have been applied to few-shot image recog- nition (Santoro et al., 2016; Munkhdalai & Yu, 2017) and learning â fastâ reinforcement learning agents (Duan et al., 2016b; Wang et al., 2016). Our experiments show that our method outperforms the recurrent approach on few- shot classiï¬
1703.03400#16
1703.03400#18
1703.03400
[ "1612.00796" ]
1703.03400#18
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
classification. Furthermore, unlike these methods, our approach simply provides a good weight initialization and uses the same gradient descent update for both the learner and meta-update. As a result, it is straightforward to fine-tune the learner for additional gradient steps.

Our approach is also related to methods for initialization of deep networks. In computer vision, models pretrained on large-scale image classification have been shown to learn effective features for a range of problems (Donahue et al., 2014). In contrast, our method explicitly optimizes the model for fast adaptability, allowing it to adapt to new tasks with only a few examples. Our method can also be viewed as explicitly maximizing sensitivity of new task losses to the model parameters. A number of prior works have explored sensitivity in deep networks, often in the context of initialization (Saxe et al., 2014; Kirkpatrick et al., 2016). Most of these works have considered good random initializations, though a number of papers have addressed data-dependent initializers (Krähenbühl et al., 2016; Salimans & Kingma, 2016), including learned initializations (Husken & Goerick, 2000; Maclaurin et al., 2015). In contrast, our method explicitly trains the parameters for sensitivity on a given task distribution, allowing for extremely efficient adaptation for problems such as K-shot learning and rapid reinforcement learning in only one or a few gradient steps.
1703.03400#17
1703.03400#19
1703.03400
[ "1612.00796" ]
1703.03400#19
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
# 5. Experimental Evaluation

The goal of our experimental evaluation is to answer the following questions: (1) Can MAML enable fast learning of new tasks? (2) Can MAML be used for meta-learning in multiple different domains, including supervised regression, classification, and reinforcement learning? (3) Can a model learned with MAML continue to improve with additional gradient updates and/or examples?

# 5.1. Regression

We start with a simple regression problem that illustrates the basic principles of MAML. Each task involves regressing from the input to the output of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. Thus, p(T) is continuous, where the amplitude varies within [0.1, 5.0] and the phase varies within [0, π], and the input and output both have a dimensionality of 1. During training and testing, datapoints x are sampled uniformly from [−5.0, 5.0]. The loss is the mean-squared error between the prediction f(x) and the true value. The regressor is a neural network model with 2 hidden layers of size 40 with ReLU nonlinearities. When training with MAML, we use one gradient update with K = 10 examples with a fixed step size α = 0.01, and use Adam as the meta-optimizer (Kingma & Ba, 2015). The baselines are likewise trained with Adam. To evaluate performance, we fine-tune a single meta-learned model on varying numbers of K examples, and compare performance to two baselines: (a) pretraining on all of the tasks, which entails training a network to regress to random sinusoid functions and then, at test-time, fine-tuning with gradient descent on the K provided points, using an automatically tuned step size, and (b) an oracle which receives the true amplitude and phase as input. In Appendix C, we show comparisons to additional multi-task and adaptation methods.

We evaluate performance by fine-tuning the model learned by MAML and the pretrained model on K = {5, 10, 20} datapoints. During fine-tuning, each gradient step is computed using the same K datapoints. The qualitative results, shown in Figure 2 and further expanded on in Appendix B, show that the learned model is able to quickly adapt with only 5 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few datapoints without catastrophic overfitting. Crucially, when the K datapoints are all in one half of the input range, the

1Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experiments is at github.com/cbfinn/maml_rl
1703.03400#18
1703.03400#20
1703.03400
[ "1612.00796" ]
1703.03400#20
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
The qualitative results, shown in Figure 2 and further expanded on in Appendix B show that the learned model is able to quickly adapt with only 5 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few dat- apoints without catastrophic overï¬ tting. Crucially, when the K datapoints are all in one half of the input range, the 1Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experi- ments is at github.com/cbfinn/maml_rl Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks MAML, K=5 MAML, K=10 pretrained, K=5, step size=0.01 retrained, K=10, step size=0.02 retrained, K=10, step size=0.02 pretrained, K=5, step size=0.01 MAML, K=5 MAML, K=10 pre-update lgradstep --+ 10 grad steps â - groundtruth « 4 used for grad lgradstep <= 10 grad steps \ pre-update
1703.03400#19
1703.03400#21
1703.03400
[ "1612.00796" ]
1703.03400#21
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Figure 2. Few-shot adaptation for the simple regression task. Left: Note that MAML is able to estimate parts of the curve where there are no datapoints, indicating that the model has learned about the periodic structure of sine waves. Right: Fine-tuning of a model pretrained on the same distribution of tasks without MAML, with a tuned step size. Due to the often contradictory outputs on the pre-training tasks, this model is unable to recover a suitable representation and fails to extrapolate from the small number of test-time samples.
1703.03400#20
1703.03400#22
1703.03400
[ "1612.00796" ]
1703.03400#22
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
[Figure 3 plot: "k-shot regression, k=10"; mean squared error versus number of gradient steps for MAML (ours), pretrained (step=0.02), and the oracle.]

Figure 3. Quantitative sinusoid regression results showing the learning curve at meta test-time. Note that MAML continues to improve with additional gradient steps without overfitting to the extremely small dataset during meta-testing, achieving a loss that is substantially lower than the baseline fine-tuning approach.

model trained with MAML can still infer the amplitude and phase in the other half of the range, demonstrating that the MAML-trained model f has learned to model the periodic nature of the sine wave. Furthermore, we observe both in the qualitative and quantitative results (Figure 3 and Appendix B) that the model learned with MAML continues to improve with additional gradient steps, despite being trained for maximal performance after one gradient step. This improvement suggests that MAML optimizes the parameters such that they lie in a region that is amenable to fast adaptation and is sensitive to loss functions from p(T), as discussed in Section 2.2, rather than overfitting to parameters θ that only improve after one step.

# 5.2. Classification
1703.03400#21
1703.03400#23
1703.03400
[ "1612.00796" ]
1703.03400#23
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
To evaluate MAML in comparison to prior meta-learning and few-shot learning algorithms, we applied our method to few-shot image recognition on the Omniglot (Lake et al., 2011) and MiniImagenet datasets. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. Each instance was drawn by a different person. The MiniImagenet dataset was proposed by Ravi & Larochelle (2017), and involves 64 training classes, 12 validation classes, and 24 test classes. The Omniglot and MiniImagenet image recognition tasks are the most common recently used few-shot learning benchmarks (Vinyals et al., 2016; Santoro et al., 2016; Ravi & Larochelle, 2017). We follow the experimental protocol proposed by Vinyals et al. (2016), which involves fast learning of N-way
1703.03400#22
1703.03400#24
1703.03400
[ "1612.00796" ]
1703.03400#24
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
classification with 1 or 5 shots. The problem of N-way classification is set up as follows: select N unseen classes, provide the model with K different instances of each of the N classes, and evaluate the model's ability to classify new instances within the N classes. For Omniglot, we randomly select 1200 characters for training, irrespective of alphabet, and use the remaining for testing. The Omniglot dataset is augmented with rotations by multiples of 90 degrees, as proposed by Santoro et al. (2016).

Our model follows the same architecture as the embedding function used by Vinyals et al. (2016), which has 4 modules with 3 × 3 convolutions and 64 filters, followed by batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2 × 2 max-pooling. The Omniglot images are downsampled to 28 × 28, so the dimensionality of the last hidden layer is 64. As in the baseline classifier used by Vinyals et al. (2016), the last layer is fed into a softmax. For Omniglot, we used strided convolutions instead of max-pooling. For MiniImagenet, we used 32 filters per layer to reduce overfitting, as done by Ravi & Larochelle (2017). In order to also provide a fair comparison against memory-augmented neural networks (Santoro et al., 2016) and to test the flexibility of MAML, we also provide results for a non-convolutional network. For this, we use a network with 4 hidden layers with sizes 256, 128, 64, 64, each including batch normalization and ReLU nonlinearities, followed by a linear layer and softmax. For all models, the loss function is the cross-entropy error between the predicted and true class. Additional hyperparameter details are included in Appendix A.1. We present the results in Table 1.
1703.03400#23
1703.03400#25
1703.03400
[ "1612.00796" ]
1703.03400#25
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
The convolutional model learned by MAML compares well to the state-of-the-art results on this task, narrowly outperforming the prior methods. Some of these existing methods, such as matching networks, Siamese networks, and memory models, are designed with few-shot classification in mind, and are not readily applicable to domains such as reinforcement learning. Additionally, the model learned with MAML uses
1703.03400#24
1703.03400#26
1703.03400
[ "1612.00796" ]
1703.03400#26
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Table 1. Few-shot classification on held-out Omniglot characters (top) and the MiniImagenet test set (bottom). MAML achieves results that are comparable to or outperform state-of-the-art convolutional and recurrent models. Siamese nets, matching nets, and the memory module approaches are all specific to classification, and are not directly applicable to regression or RL scenarios. The ± shows 95% confidence intervals over tasks. Note that the Omniglot results may not be strictly comparable since the train/test splits used in the prior work were not available. The MiniImagenet evaluation of baseline methods and matching networks is from Ravi & Larochelle (2017).

| Omniglot (Lake et al., 2011) | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|
| MANN, no conv (Santoro et al., 2016) | 82.8% | 94.9% | - | - |
| MAML, no conv (ours) | 89.7 ± 1.1% | 97.5 ± 0.6% | - | - |
| Siamese nets (Koch, 2015) | 97.3% | 98.4% | 88.2% | 97.0% |
| matching nets (Vinyals et al., 2016) | 98.1% | 98.9% | 93.8% | 98.5% |
| neural statistician (Edwards & Storkey, 2017) | 98.1% | 99.5% | 93.2% | 98.1% |
| memory mod. (Kaiser et al., 2017) | 98.4% | 99.6% | 95.0% | 98.6% |
| MAML (ours) | 98.7 ± 0.4% | 99.9 ± 0.1% | 95.8 ± 0.3% | 98.9 ± 0.2% |

| MiniImagenet (Ravi & Larochelle, 2017) | 5-way 1-shot | 5-way 5-shot |
|---|---|---|
| fine-tuning baseline | 28.86 ± 0.54% | 49.79 ± 0.79% |
| nearest neighbor baseline | 41.08 ± 0.70% | 51.04 ± 0.65% |
| matching nets (Vinyals et al., 2016) | 43.56 ± 0.84% | 55.31 ± 0.73% |
| meta-learner LSTM (Ravi & Larochelle, 2017) | 43.44 ± 0.77% | 60.60 ± 0.71% |
| MAML, first order approx. (ours) | 48.07 ± 1.75% | 63.15 ± 0.91% |
| MAML (ours) | 48.70 ± 1.84% | 63.11 ± 0.92% |

fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the
1703.03400#25
1703.03400#27
1703.03400
[ "1612.00796" ]
1703.03400#27
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
â 88.2% 93.8% 93.2% 95.0% 5-shot â â 97.0% 98.5% 98.1% 98.6% 98.7 ± 0.4% 99.9 ± 0.1% 95.8 ± 0.3% 98.9 ± 0.2% 5-way Accuracy MiniImagenet (Ravi & Larochelle, 2017) ï¬ ne-tuning baseline nearest neighbor baseline matching nets (Vinyals et al., 2016) meta-learner LSTM (Ravi & Larochelle, 2017) MAML, ï¬ rst order approx. (ours) MAML (ours) 5-shot 1-shot 49.79 ± 0.79% 28.86 ± 0.54% 51.04 ± 0.65% 41.08 ± 0.70% 55.31 ± 0.73% 43.56 ± 0.84% 43.44 ± 0.77% 60.60 ± 0.71% 48.07 ± 1.75% 63.15 ± 0.91% 48.70 ± 1.84% 63.11 ± 0.92% fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the classiï¬
1703.03400#26
1703.03400#28
1703.03400
[ "1612.00796" ]
1703.03400#28
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
er itself. Compared to these prior methods, memory-augmented neural networks (Santoro et al., 2016) speciï¬ cally, and recurrent meta-learning models in gen- eral, represent a more broadly applicable class of meth- ods that, like MAML, can be used for other tasks such as reinforcement learning (Duan et al., 2016b; Wang et al., 2016). However, as shown in the comparison, MAML sig- niï¬ cantly outperforms memory-augmented networks and the meta-learner LSTM on 5-way Omniglot and MiniIm- agenet classiï¬ cation, both in the 1-shot and 5-shot case. A significant computational expense in MAML comes from the use of second derivatives when backpropagat- ing the meta-gradient through the gradient operator in the meta-objective (see Equation (1)). On Minilmagenet, we show a comparison to a first-order approximation of MAML, where these second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values 0, which provides for effective meta-learning. Surprisingly however, the perfor- mance of this method is nearly the same as that obtained with full second derivatives, suggesting that most of the improvement in MAML comes from the gradients of the objective at the post-update parameter values, rather than the second order updates from differentiating through the gradient update. Past work has observed that ReLU neu- ral networks are locally almost linear (Goodfellow et al., 2015), which suggests that second derivatives may be close to zero in most cases, partially explaining the good perfor- point robot, 2d navigation WANE Gus) pretrained + random -10! += oracle average return (log scale) 1 2 number of gradient steps MAML as pre-update pre-update oa â 3steps oa 3 â kk goal position |) °2 oa a ool] â 3steps pretrained -o| 21} |e A goal position point robot, 2d navigation WANE Gus) pretrained + random -10! += oracle average return (log scale) 1 2 number of gradient steps
1703.03400#27
1703.03400#29
1703.03400
[ "1612.00796" ]
1703.03400#29
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
MAML as pre-update oa â 3steps 3 â kk goal position |) oa a -o| pre-update oa °2 ool] â 3steps pretrained 21} |e A goal position Figure 4. Top: quantitative results from 2D navigation task, Bot- tom: qualitative comparison between model learned with MAML and with ï¬ ne-tuning from a pretrained network. mance of the ï¬ rst-order approximation. This approxima- tion removes the need for computing Hessian-vector prod- ucts in an additional backward pass, which we found led to roughly 33% speed-up in network computation. # 5.3.
1703.03400#28
1703.03400#30
1703.03400
[ "1612.00796" ]
1703.03400#30
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Reinforcement Learning To evaluate MAML on reinforcement learning problems, we constructed several sets of tasks based off of the sim- ulated continuous control environments in the rllab bench- mark suite (Duan et al., 2016a). We discuss the individual domains below. In all of the domains, the model trained by MAML is a neural network policy with two hidden lay- ers of size 100, with ReLU nonlinearities. The gradient updates are computed using vanilla policy gradient (RE- INFORCE) (Williams, 1992), and we use trust-region pol- icy optimization (TRPO) as the meta-optimizer (Schulman et al., 2015). In order to avoid computing third derivatives, Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks half-cheetah, forward/backward half-cheetah, goal velocity average a 0 2 0 1 2 number of gradient steps number of gradient steps 1 2 number of gradient steps ant, goal velocity ant, forward/backward MAML (ours) pretrained random â *> oracle 1 2 3 number of gradient steps half-cheetah, forward/backward a 0 2 1 2 number of gradient steps 2 0 1 2 number of gradient steps ant, goal velocity ant, forward/backward MAML (ours) pretrained random â *> oracle 1 2 3 number of gradient steps half-cheetah, goal velocity average return a number of gradient steps MAML (ours) pretrained random â
1703.03400#29
1703.03400#31
1703.03400
[ "1612.00796" ]
1703.03400#31
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
*> oracle Figure 5. Reinforcement learning results for the half-cheetah and ant locomotion tasks, with the tasks shown on the far right. Each gradient step requires additional samples from the environment, unlike the supervised learning tasks. The results show that MAML can adapt to new goal velocities and directions substantially faster than conventional pretraining or random initialization, achieving good performs in just two or three gradient steps. We exclude the goal velocity, random baseline curves, since the returns are much worse (< â
1703.03400#30
1703.03400#32
1703.03400
[ "1612.00796" ]
1703.03400#32
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
200 for cheetah and < â 25 for ant). we use ï¬ nite differences to compute the Hessian-vector products for TRPO. For both learning and meta-learning updates, we use the standard linear feature baseline pro- posed by Duan et al. (2016a), which is ï¬ tted separately at each iteration for each sampled task in the batch. We com- pare to three baseline models: (a) pretraining one policy on all of the tasks and then ï¬ ne-tuning, (b) training a policy from randomly initialized weights, and (c) an oracle policy which receives the parameters of the task as input, which for the tasks below corresponds to a goal position, goal di- rection, or goal velocity for the agent. The baseline models of (a) and (b) are ï¬ ne-tuned with gradient descent with a manually tuned step size.
1703.03400#31
1703.03400#33
1703.03400
[ "1612.00796" ]
1703.03400#33
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Videos of the learned policies can be viewed at sites.google.com/view/maml 2D Navigation. In our ï¬ rst meta-RL experiment, we study a set of tasks where a point agent must move to different goal positions in 2D, randomly chosen for each task within a unit square. The observation is the current 2D position, and actions correspond to velocity commands clipped to be in the range [â 0.1, 0.1]. The reward is the negative squared distance to the goal, and episodes terminate when the agent is within 0.01 of the goal or at the horizon of H = 100. The policy was trained with MAML to maximize performance after 1 policy gradient update using 20 trajectories. Ad- ditional hyperparameter settings for this problem and the following RL problems are in Appendix A.2. In our evalu- ation, we compare adaptation to a new task with up to 4 gra- dient updates, each with 40 samples. The results in Figure 4 show the adaptation performance of models that are initial- ized with MAML, conventional pretraining on the same set of tasks, random initialization, and an oracle policy that receives the goal position as input. The results show that MAML can learn a model that adapts much more quickly in a single gradient update, and furthermore continues to improve with additional updates. the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.0 and 2.0 for the cheetah and between 0.0 and 3.0 for the ant. In the goal direction experiments, the re- ward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p(T ). The horizon is H = 200, with 20 rollouts per gradi- ent step for all problems except the ant forward/backward task, which used 40 rollouts per step. The results in Fig- ure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gra- dient update, and continues to improve with more gradi- ent steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining.
1703.03400#32
1703.03400#34
1703.03400
[ "1612.00796" ]
1703.03400#34
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016). # 6. Discussion and Future Work We introduced a meta-learning method based on learning easily adaptable model parameters through gradient de- scent. Our approach has a number of beneï¬ ts. It is simple and does not introduce any learned parameters for meta- learning. It can be combined with any model representation that is amenable to gradient-based training, and any differ- entiable objective, including classiï¬ cation, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be per- formed with any amount of data and any number of gra- dient steps, though we demonstrate state-of-the-art results on classiï¬ cation with only one or ï¬ ve examples per class.
1703.03400#33
1703.03400#35
1703.03400
[ "1612.00796" ]
1703.03400#35
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience. Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo sim- ulator (Todorov et al., 2012). The tasks require two sim- ulated robots â a planar cheetah and a 3D quadruped (the â antâ ) â
1703.03400#34
1703.03400#36
1703.03400
[ "1612.00796" ]