id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1511.06279#41 | Neural Programmer-Interpreters | # 6 APPENDIX 6.1 LISTING OF LEARNED PROGRAMS Below we list the programs learned by our model, as Program: Description; calls:
ADD: Perform multi-digit addition; calls ADD1, LSHIFT
ADD1: Perform single-digit addition; calls ACT, CARRY
CARRY: Mark a 1 in the carry row one unit left; calls ACT
LSHIFT: Shift a specified pointer one step left; calls ACT
RSHIFT: Shift a specified pointer one step right; calls ACT
ACT: Move a pointer or write to the scratch pad; no subcalls
BUBBLESORT: Perform bubble sort (ascending order); calls BUBBLE, RESET
BUBBLE: Perform one sweep of pointers left to right; calls ACT, BSTEP
RESET: Move both pointers all the way left; calls LSHIFT
BSTEP: Conditionally swap and advance pointers; calls COMPSWAP, RSHIFT
COMPSWAP: Conditionally swap two elements; calls ACT
LSHIFT: Shift a specified pointer one step left; calls ACT
RSHIFT: Shift a specified pointer one step right; calls ACT
ACT: Swap two values at pointer locations or move a pointer; no subcalls
GOTO: Change 3D car pose to match the target; calls HGOTO, VGOTO
HGOTO: Move horizontally to the target angle; calls LGOTO, RGOTO
LGOTO: Move left to match the target angle; calls ACT
RGOTO: Move right to match the target angle; calls ACT
VGOTO: Move vertically to the target elevation; calls UGOTO, DGOTO
UGOTO: Move up to match the target elevation; calls ACT
DGOTO: Move down to match the target elevation; calls ACT
ACT: Move camera 15° up, down, left or right; no subcalls
RJMP: Move all pointers to the rightmost position; calls RSHIFT
MAX: Find maximum element of an array; calls BUBBLESORT, RJMP | 1511.06279#40 | 1511.06279#42 | 1511.06279 | [
"1511.04834"
] |
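The chunk above lists each learned program together with the subprograms it calls. As a rough illustration (not part of the paper), the sorting portion of that call hierarchy can be written down as a plain Python dictionary and walked recursively; only the program names come from Table 2, the dictionary representation itself is an assumption.

```python
# Hypothetical sketch of the program-call hierarchy for the sorting programs in Table 2.
CALLS = {
    "BUBBLESORT": ["BUBBLE", "RESET"],
    "BUBBLE":     ["ACT", "BSTEP"],
    "RESET":      ["LSHIFT"],
    "BSTEP":      ["COMPSWAP", "RSHIFT"],
    "COMPSWAP":   ["ACT"],
    "LSHIFT":     ["ACT"],
    "RSHIFT":     ["ACT"],
    "ACT":        [],   # primitive: moves a pointer or writes to the scratch pad
}

def expand(program, depth=0):
    """Print the static call tree rooted at `program`."""
    print("  " * depth + program)
    for sub in CALLS.get(program, []):
        expand(sub, depth + 1)

expand("BUBBLESORT")
```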
1511.06279#42 | Neural Programmer-Interpreters | Table 2: Programs learned for addition, sorting and 3D car canonicalization. Note that the ACT program has a different effect depending on the environment and on the passed-in arguments. 6.2 GENERATED EXECUTION TRACE OF BUBBLESORT Figure 8 shows the sequence of program calls for BUBBLESORT. Figure 8: Generated execution trace from our trained NPI sorting the array [9,2,5]. (The figure lists the nested calls made during each of the three sweeps: BUBBLE, BSTEP, COMPSWAP, SWAP, RSHIFT and the pointer moves, followed by RESET, LSHIFT and a PTR 3 RIGHT step.) Pointers 1 and 2 are used to implement the "bubble" operation involving the comparison and swapping of adjacent array elements. The third pointer (referred to in the trace as "PTR 3") is used to count the number of calls to BUBBLE. After every call to RESET the swapping pointers are moved to the beginning of the array and the counting pointer is advanced by 1. When it has reached the end of the scratch pad, the model learns to halt execution of BUBBLESORT. | 1511.06279#41 | 1511.06279#43 | 1511.06279 | [
"1511.04834"
] |
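The execution trace described above can be mimicked by an ordinary two-pointer bubble sort that logs a line for each "program call" it makes. The sketch below illustrates that control flow only; the real NPI emits these calls from a learned LSTM core rather than from hand-written Python, and the exact logging granularity here is an assumption.

```python
def bubblesort_trace(a):
    """Pointer-style bubble sort that logs calls in the spirit of Figure 8."""
    trace = []
    n = len(a)
    for _ in range(n):                      # PTR 3 counts the sweeps
        trace.append("BUBBLE")
        p1, p2 = 0, 1                       # PTR 1 and PTR 2
        while p2 < n:                       # one left-to-right sweep
            trace.append("BSTEP")
            trace.append("COMPSWAP")
            if a[p1] > a[p2]:
                trace.append("SWAP 1 2")
                a[p1], a[p2] = a[p2], a[p1]
            trace.append("RSHIFT")
            p1, p2 = p1 + 1, p2 + 1
        trace.append("RESET")               # move both swapping pointers back to the left
        trace.append("PTR 3 RIGHT")
    return a, trace

sorted_a, calls = bubblesort_trace([9, 2, 5])
print(sorted_a)   # [2, 5, 9]
print(calls[:8])
```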
1511.06279#43 | Neural Programmer-Interpreters | 6.3 ADDITIONAL EXPERIMENT ON ADDITION GENERALIZATION Based on reviewer feedback, we conducted an additional comparison of NPI and sequence-to-sequence models for the addition task, to evaluate the generalization ability. We implemented addition in a sequence-to-sequence model, training it to model sequences of the following form, e.g. for "90 + 160 = 250" we represent the sequence as: 90X160X250 For the simple Seq2Seq baseline above (same number of LSTM layers and hidden units as NPI), we observed that the model could predict one or two digits reliably, but did not generalize even up to 20-digit addition. However, we are aware that others have gotten multi-digit addition of the above form to work to some extent with curriculum learning (Zaremba & Sutskever, 2014). In order to make a more competitive baseline, we helped Seq2Seq in two ways: 1) reverse input digits and stack the two numbers on top of each other to form a 2-channel sequence, and 2) reverse input digits and generate reversed output digits immediately at each time step. In the approach of 1), the seq2seq model schematically looks like this: output: XXXX250 input 1: 090XXXX input 2: 061XXXX In the approach of 2), the sequence looks like this: output: 052 input 1: 090 input 2: 061 Both 1), which we call s2s-stacked, and 2), which we call s2s-easy, are much stronger competitors to NPI than even the proposed addition baseline. We compare the generalization performance of NPI to these baselines in the figure below: Addition generalization: | 1511.06279#42 | 1511.06279#44 | 1511.06279 | [
"1511.04834"
] |
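As an illustration of the two baseline encodings described above, the helper below turns an addition problem into the s2s-stacked and s2s-easy input/output strings. The digit reversal, zero padding and "X" padding are inferred from the "90 + 160 = 250" example in the text; the fixed window width of 7 is an assumption, not something stated in the paper.

```python
def encode_addition(a, b, width=7):
    """Return the s2s-stacked and s2s-easy encodings of 'a + b' used as baselines."""
    s = a + b
    n = max(len(str(a)), len(str(b)))        # pad both operands to equal digit length
    ra = str(a).zfill(n)[::-1]               # 90  -> "090" -> "090"
    rb = str(b).zfill(n)[::-1]               # 160 -> "160" -> "061"

    # s2s-stacked: reversed operands as a 2-channel input, X-padded to a fixed width;
    # the target is the (unreversed) sum, right-aligned in the same window.
    stacked = {"input 1": ra.ljust(width, "X"),
               "input 2": rb.ljust(width, "X"),
               "output":  str(s).rjust(width, "X")}

    # s2s-easy: reversed operands, and the reversed sum emitted digit by digit.
    easy = {"input 1": ra, "input 2": rb, "output": str(s)[::-1].ljust(n, "0")}
    return stacked, easy

stacked, easy = encode_addition(90, 160)
print(stacked)   # {'input 1': '090XXXX', 'input 2': '061XXXX', 'output': 'XXXX250'}
print(easy)      # {'input 1': '090', 'input 2': '061', 'output': '052'}
```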
1511.06279#44 | Neural Programmer-Interpreters | NPI vs Seq2Seq (plot of accuracy versus test sequence length, roughly 10 to beyond 1000, for NPI@32 per-sequence, S2S-stack@32 per-character, S2S-stack@512 per-character, S2S-easy@32 per-sequence and S2S-easy@64 per-sequence). Figure 9: Comparing NPI and Seq2Seq variants on addition generalization to longer sequences. We found that NPI trained on 32 examples for problem lengths 1,...,20 generalizes with 100% accuracy to all the lengths we tried (up to 3000). s2s-easy trained on twice as many examples generalizes to just over length 2000 problems. s2s-stacked barely generalizes beyond 5, even with far more data. This suggests that locality of computation makes a large impact on generalization performance. Even when we carefully ordered and stacked the input numbers for Seq2Seq, NPI still had an edge in performance. In contrast to Seq2Seq, NPI is taught (supervised for now) to move its pointers so that the key operations (e.g. single-digit add, carry) can be done using only local information, and this appears to help generalization. | 1511.06279#43 | 1511.06279#45 | 1511.06279 | [
"1511.04834"
] |
1511.06279#45 | Neural Programmer-Interpreters | 13 | 1511.06279#44 | 1511.06279 | [
"1511.04834"
] |
|
1511.06434#0 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | arXiv:1511.06434v2 [cs.LG] 7 Jan 2016 Under review as a conference paper at ICLR 2016 UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS Alec Radford & Luke Metz indico Research Boston, MA {alec,luke}@indico.io Soumith Chintala Facebook AI Research New York, NY [email protected] # ABSTRACT | 1511.06434#1 | 1511.06434 | [
"1505.00853"
] |
|
1511.06434#1 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsuper- vised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolu- tional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image repre- sentations. | 1511.06434#0 | 1511.06434#2 | 1511.06434 | [
"1505.00853"
] |
1511.06434#2 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 1 # INTRODUCTION Learning reusable feature representations from large unlabeled datasets has been an area of active research. In the context of computer vision, one can leverage the practically unlimited amount of unlabeled images and videos to learn good intermediate representations, which can then be used on a variety of supervised learning tasks such as image classiï¬ cation. We propose that one way to build good image representations is by training Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and later reusing parts of the generator and discriminator networks as feature extractors for supervised tasks. GANs provide an attractive alternative to maximum likelihood techniques. One can additionally argue that their learning process and the lack of a heuristic cost function (such as pixel-wise independent mean-square error) are attractive to representation learning. GANs have been known to be unstable to train, often resulting in generators that produce nonsensical outputs. There has been very limited published research in trying to understand and visualize what GANs learn, and the intermediate representations of multi-layer GANs. | 1511.06434#1 | 1511.06434#3 | 1511.06434 | [
"1505.00853"
] |
1511.06434#3 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In this paper, we make the following contributions â ¢ We propose and evaluate a set of constraints on the architectural topology of Convolutional GANs that make them stable to train in most settings. We name this class of architectures Deep Convolutional GANs (DCGAN) â ¢ We use the trained discriminators for image classiï¬ cation tasks, showing competitive per- formance with other unsupervised algorithms. â ¢ We visualize the ï¬ lters learnt by GANs and empirically show that speciï¬ c ï¬ lters have learned to draw speciï¬ c objects. 1 | 1511.06434#2 | 1511.06434#4 | 1511.06434 | [
"1505.00853"
] |
1511.06434#4 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | # Under review as a conference paper at ICLR 2016 â ¢ We show that the generators have interesting vector arithmetic properties allowing for easy manipulation of many semantic qualities of generated samples. 2 RELATED WORK 2.1 REPRESENTATION LEARNING FROM UNLABELED DATA Unsupervised representation learning is a fairly well studied problem in general computer vision research, as well as in the context of images. A classic approach to unsupervised representation learning is to do clustering on the data (for example using K-means), and leverage the clusters for improved classiï¬ | 1511.06434#3 | 1511.06434#5 | 1511.06434 | [
"1505.00853"
] |
1511.06434#5 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | cation scores. In the context of images, one can do hierarchical clustering of image patches (Coates & Ng, 2012) to learn powerful image representations. Another popular method is to train auto-encoders (convolutionally, stacked (Vincent et al., 2010), separating the what and where components of the code (Zhao et al., 2015), ladder structures (Rasmus et al., 2015)) that encode an image into a compact code, and decode the code to reconstruct the image as accurately as possible. These methods have also been shown to learn good feature representations from image pixels. Deep belief networks (Lee et al., 2009) have also been shown to work well in learning hierarchical representations. 2.2 GENERATING NATURAL IMAGES Generative image models are well studied and fall into two categories: parametric and non- parametric. The non-parametric models often do matching from a database of existing images, often matching patches of images, and have been used in texture synthesis (Efros et al., 1999), super-resolution (Freeman et al., 2002) and in-painting (Hays & Efros, 2007). Parametric models for generating images has been explored extensively (for example on MNIST digits or for texture synthesis (Portilla & Simoncelli, 2000)). However, generating natural images of the real world have had not much success until recently. A variational sampling approach to generating images (Kingma & Welling, 2013) has had some success, but the samples often suffer from being blurry. Another approach generates images using an iterative forward diffusion process (Sohl-Dickstein et al., 2015). Generative Adversarial Networks (Goodfellow et al., 2014) generated images suffering from being noisy and incomprehensible. A laplacian pyramid extension to this approach (Denton et al., 2015) showed higher quality images, but they still suffered from the objects looking wobbly because of noise introduced in chaining multiple models. A recurrent network approach (Gregor et al., 2015) and a deconvolution network approach (Dosovitskiy et al., 2014) have also recently had some success with generating natural images. However, they have not leveraged the generators for supervised tasks. 2.3 VISUALIZING THE INTERNALS OF CNNS | 1511.06434#4 | 1511.06434#6 | 1511.06434 | [
"1505.00853"
] |
1511.06434#6 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | One constant criticism of using neural networks has been that they are black-box methods, with little understanding of what the networks do in the form of a simple human-consumable algorithm. In the context of CNNs, Zeiler et. al. (Zeiler & Fergus, 2014) showed that by using deconvolutions and ï¬ ltering the maximal activations, one can ï¬ nd the approximate purpose of each convolution ï¬ lter in the network. Similarly, using a gradient descent on the inputs lets us inspect the ideal image that activates certain subsets of ï¬ lters (Mordvintsev et al.). # 3 APPROACH AND MODEL ARCHITECTURE | 1511.06434#5 | 1511.06434#7 | 1511.06434 | [
"1505.00853"
] |
1511.06434#7 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Historical attempts to scale up GANs using CNNs to model images have been unsuccessful. This motivated the authors of LAPGAN (Denton et al., 2015) to develop an alternative approach to it- eratively upscale low resolution generated images which can be modeled more reliably. We also encountered difï¬ culties attempting to scale GANs using CNN architectures commonly used in the supervised literature. However, after extensive model exploration we identiï¬ ed a family of archi- 2 | 1511.06434#6 | 1511.06434#8 | 1511.06434 | [
"1505.00853"
] |
1511.06434#8 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | # Under review as a conference paper at ICLR 2016 tectures that resulted in stable training across a range of datasets and allowed for training higher resolution and deeper generative models. Core to our approach is adopting and modifying three recently demonstrated changes to CNN archi- tectures. The ï¬ rst is the all convolutional net (Springenberg et al., 2014) which replaces deterministic spatial pooling functions (such as maxpooling) with strided convolutions, allowing the network to learn its own spatial downsampling. We use this approach in our generator, allowing it to learn its own spatial upsampling, and discriminator. Second is the trend towards eliminating fully connected layers on top of convolutional features. The strongest example of this is global average pooling which has been utilized in state of the art image classiï¬ cation models (Mordvintsev et al.). We found global average pooling increased model stability but hurt convergence speed. A middle ground of directly connecting the highest convolutional features to the input and output respectively of the generator and discriminator worked well. | 1511.06434#7 | 1511.06434#9 | 1511.06434 | [
"1505.00853"
] |
1511.06434#9 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | The ï¬ rst layer of the GAN, which takes a uniform noise distribution Z as input, could be called fully connected as it is just a matrix multiplication, but the result is reshaped into a 4-dimensional tensor and used as the start of the convolution stack. For the discriminator, the last convolution layer is ï¬ attened and then fed into a single sigmoid output. See Fig. 1 for a visualization of an example model architecture. Third is Batch Normalization (Ioffe & Szegedy, 2015) which stabilizes learning by normalizing the input to each unit to have zero mean and unit variance. This helps deal with training problems that arise due to poor initialization and helps gradient ï¬ ow in deeper models. This proved critical to get deep generators to begin learning, preventing the generator from collapsing all samples to a single point which is a common failure mode observed in GANs. Directly applying batchnorm to all layers however, resulted in sample oscillation and model instability. This was avoided by not applying batchnorm to the generator output layer and the discriminator input layer. The ReLU activation (Nair & Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectiï¬ ed activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling. This is in contrast to the original GAN paper, which used the maxout activation (Goodfellow et al., 2013). # Architecture guidelines for stable Deep Convolutional GANs | 1511.06434#8 | 1511.06434#10 | 1511.06434 | [
"1505.00853"
] |
1511.06434#10 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | ⠢ Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator). Use batchnorm in both the generator and the discriminator. ⠢ Remove fully connected hidden layers for deeper architectures. ⠢ Use ReLU activation in generator for all layers except for the output, which uses Tanh. ⠢ Use LeakyReLU activation in the discriminator for all layers. # 4 DETAILS OF ADVERSARIAL TRAINING We trained DCGANs on three datasets, Large-scale Scene Understanding (LSUN) (Yu et al., 2015), Imagenet-1k and a newly assembled Faces dataset. Details on the usage of each of these datasets are given below. No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1]. All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128. All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. In the LeakyReLU, the slope of the leak was set to 0.2 in all models. While previous GAN work has used momentum to accelerate training, we used the Adam optimizer (Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001, to be too high, using 0.0002 instead. Additionally, we found leaving the momentum term β1 at the | 1511.06434#9 | 1511.06434#11 | 1511.06434 | [
"1505.00853"
] |
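The architecture guidelines and training details above (strided convolutions instead of pooling, LeakyReLU with slope 0.2 in the discriminator, batchnorm everywhere except the discriminator input layer, and a flattened single sigmoid output) can be sketched directly in PyTorch. This is a minimal illustration under those constraints; the specific channel widths are assumptions, not the exact model from the paper.

```python
import torch.nn as nn

def dcgan_discriminator(nc=3, ndf=64):
    """64x64 image -> single sigmoid output; strided convs, LeakyReLU(0.2),
    batchnorm on every layer except the input layer, no pooling or hidden FC layers."""
    return nn.Sequential(
        nn.Conv2d(nc, ndf, 4, stride=2, padding=1),           # 64 -> 32 (no batchnorm here)
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),       # 32 -> 16
        nn.BatchNorm2d(ndf * 2),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1),   # 16 -> 8
        nn.BatchNorm2d(ndf * 4),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(ndf * 4, ndf * 8, 4, stride=2, padding=1),   # 8 -> 4
        nn.BatchNorm2d(ndf * 8),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(ndf * 8, 1, 4, stride=1, padding=0),         # 4 -> 1
        nn.Flatten(),
        nn.Sigmoid(),
    )
```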
1511.06434#11 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Figure 1: DCGAN generator used for LSUN scene modeling. A 100 dimensional uniform distribution Z is projected to a small spatial extent convolutional representation with many feature maps. A series of four fractionally-strided convolutions (in some recent papers, these are wrongly called deconvolutions) then convert this high level representation into a 64 × 64 pixel image. Notably, no fully connected or pooling layers are used. suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training. | 1511.06434#10 | 1511.06434#12 | 1511.06434 | [
"1505.00853"
] |
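The Figure 1 caption above describes the generator: a 100-dimensional uniform Z is projected and reshaped to a small spatial feature map, then four fractionally-strided convolutions grow it to 64x64, with ReLU activations, batchnorm except on the output layer, and a Tanh output. Below is a minimal PyTorch sketch of that shape progression; the 1024-channel, 4x4 starting block and the optimizer/initialization comments at the end are assumptions consistent with the text, not the authors' exact code.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """z (100-dim) -> project & reshape to 4x4 -> four fractionally-strided convs -> 64x64x3."""
    def __init__(self, nz=100, ngf=1024, nc=3):
        super().__init__()
        self.ngf = ngf
        self.project = nn.Linear(nz, ngf * 4 * 4)          # "fully connected" input layer
        self.deconv = nn.Sequential(
            nn.BatchNorm2d(ngf), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ngf, ngf // 2, 4, stride=2, padding=1),       # 4 -> 8
            nn.BatchNorm2d(ngf // 2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ngf // 2, ngf // 4, 4, stride=2, padding=1),  # 8 -> 16
            nn.BatchNorm2d(ngf // 4), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ngf // 4, ngf // 8, 4, stride=2, padding=1),  # 16 -> 32
            nn.BatchNorm2d(ngf // 8), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ngf // 8, nc, 4, stride=2, padding=1),        # 32 -> 64
            nn.Tanh(),                                      # no batchnorm on the output layer
        )

    def forward(self, z):
        x = self.project(z).view(-1, self.ngf, 4, 4)        # reshape into a 4D tensor
        return self.deconv(x)

z = torch.rand(16, 100) * 2 - 1      # uniform noise in [-1, 1]
imgs = DCGANGenerator()(z)           # shape: (16, 3, 64, 64)
# Training-detail sketch from Section 4: Adam(lr=2e-4, betas=(0.5, 0.999)), minibatch 128,
# weights drawn from N(0, 0.02); beta2 here is the library default, not stated in the paper.
```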
1511.06434#12 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | # 4.1 LSUN As visual quality of samples from generative image models has improved, concerns of over-ï¬ tting and memorization of training samples have risen. To demonstrate how our model scales with more data and higher resolution generation, we train a model on the LSUN bedrooms dataset containing a little over 3 million training examples. Recent analysis has shown that there is a direct link be- tween how fast models learn and their generalization performance (Hardt et al., 2015). We show samples from one epoch of training (Fig.2), mimicking online learning, in addition to samples after convergence (Fig.3), as an opportunity to demonstrate that our model is not producing high quality samples via simply overï¬ tting/memorizing training examples. No data augmentation was applied to the images. # 4.1.1 DEDUPLICATION To further decrease the likelihood of the generator memorizing input examples (Fig.2) we perform a simple image de-duplication process. | 1511.06434#11 | 1511.06434#13 | 1511.06434 | [
"1505.00853"
] |
1511.06434#13 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | We ï¬ t a 3072-128-3072 de-noising dropout regularized RELU autoencoder on 32x32 downsampled center-crops of training examples. The resulting code layer activations are then binarized via thresholding the ReLU activation which has been shown to be an effective information preserving technique (Srivastava et al., 2014) and provides a convenient form of semantic-hashing, allowing for linear time de-duplication . Visual inspection of hash collisions showed high precision with an estimated false positive rate of less than 1 in 100. Additionally, the technique detected and removed approximately 275,000 near duplicates, suggesting a high recall. | 1511.06434#12 | 1511.06434#14 | 1511.06434 | [
"1505.00853"
] |
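The de-duplication step above fits a 3072-128-3072 denoising, dropout-regularized ReLU autoencoder on 32x32 center crops and binarizes the code layer by thresholding the ReLU activations, giving a semantic hash for linear-time de-duplication. The sketch below shows only that hashing step; the dropout rate, threshold value and the training loop (omitted) are assumptions.

```python
import torch
import torch.nn as nn

# 3072-128-3072 autoencoder (3072 = 32*32*3); train with a reconstruction loss elsewhere.
encoder = nn.Sequential(nn.Dropout(0.2), nn.Linear(3072, 128), nn.ReLU())
decoder = nn.Linear(128, 3072)

def semantic_hash(flat_crops, threshold=0.0):
    """Binarize the ReLU code layer; equal hashes flag candidate near-duplicates."""
    with torch.no_grad():
        codes = encoder(flat_crops)                 # (N, 128) non-negative activations
    bits = (codes > threshold).to(torch.uint8)      # threshold the ReLU activations
    return [tuple(row.tolist()) for row in bits]    # hashable 128-bit keys

# Assumed usage: hashes = semantic_hash(crops_32x32.flatten(1)); images that share a key
# are treated as duplicates, so de-duplication reduces to a dictionary lookup.
```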
1511.06434#14 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 4.2 FACES We scraped images containing human faces from random web image queries of peoples names. The people names were acquired from dbpedia, with a criterion that they were born in the modern era. This dataset has 3M images from 10K people. We run an OpenCV face detector on these images, keeping the detections that are sufï¬ ciently high resolution, which gives us approximately 350,000 face boxes. We use these face boxes for training. No data augmentation was applied to the images. | 1511.06434#13 | 1511.06434#15 | 1511.06434 | [
"1505.00853"
] |
1511.06434#15 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 4 # Under review as a conference paper at ICLR 2016 Figure 2: Generated bedrooms after one training pass through the dataset. Theoretically, the model could learn to memorize training examples, but this is experimentally unlikely as we train with a small learning rate and minibatch SGD. We are aware of no prior empirical evidence demonstrating memorization with SGD and a small learning rate. Figure 3: Generated bedrooms after ï¬ ve epochs of training. There appears to be evidence of visual under-ï¬ tting via repeated noise textures across multiple samples such as the base boards of some of the beds. 4.3 IMAGENET-1K We use Imagenet-1k (Deng et al., 2009) as a source of natural images for unsupervised training. We train on 32 à 32 min-resized center crops. No data augmentation was applied to the images. | 1511.06434#14 | 1511.06434#16 | 1511.06434 | [
"1505.00853"
] |
1511.06434#16 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 5 # Under review as a conference paper at ICLR 2016 5 EMPIRICAL VALIDATION OF DCGANS CAPABILITIES 5.1 CLASSIFYING CIFAR-10 USING GANS AS A FEATURE EXTRACTOR One common technique for evaluating the quality of unsupervised representation learning algo- rithms is to apply them as a feature extractor on supervised datasets and evaluate the performance of linear models ï¬ tted on top of these features. On the CIFAR-10 dataset, a very strong baseline performance has been demonstrated from a well tuned single layer feature extraction pipeline utilizing K-means as a feature learning algorithm. When using a very large amount of feature maps (4800) this technique achieves 80.6% accuracy. An unsupervised multi-layered extension of the base algorithm reaches 82.0% accuracy (Coates & Ng, 2011). To evaluate the quality of the representations learned by DCGANs for supervised tasks, we train on Imagenet-1k and then use the discriminatorâ s convolutional features from all layers, maxpooling each layers representation to produce a 4 à 4 spatial grid. These features are then ï¬ attened and concatenated to form a 28672 dimensional vector and a regularized linear L2-SVM classiï¬ er is trained on top of them. This achieves 82.8% accuracy, out performing all K-means based approaches. Notably, the discriminator has many less feature maps (512 in the highest layer) compared to K-means based techniques, but does result in a larger total feature vector size due to the many layers of 4 à 4 spatial locations. The performance of DCGANs is still less than that of Exemplar CNNs (Dosovitskiy et al., 2015), a technique which trains normal discriminative CNNs in an unsupervised fashion to differentiate between speciï¬ cally chosen, aggressively augmented, exemplar samples from the source dataset. Further improvements could be made by ï¬ netuning the discriminatorâ s representations, but we leave this for future work. Additionally, since our DCGAN was never trained on CIFAR-10 this experiment also demonstrates the domain robustness of the learned features. Table 1: CIFAR-10 classiï¬ cation results using our pre-trained model. | 1511.06434#15 | 1511.06434#17 | 1511.06434 | [
"1505.00853"
] |
1511.06434#17 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Our DCGAN is not pre-trained on CIFAR-10, but on Imagenet-1k, and the features are used to classify CIFAR-10 images.
Model: Accuracy / Accuracy (400 per class) / max # of feature units
1 Layer K-means: 80.6% / 63.7% (±0.7%) / 4800
3 Layer K-means Learned RF: 82.0% / 70.7% (±0.7%) / 3200
View Invariant K-means: 81.9% / 72.6% (±0.7%) / 6400
Exemplar CNN: 84.3% / 77.4% (±0.2%) / 1024
DCGAN (ours) + L2-SVM: 82.8% / 73.8% (±0.4%) / 512
5.2 CLASSIFYING SVHN DIGITS USING GANS AS A FEATURE EXTRACTOR On the StreetView House Numbers dataset (SVHN) (Netzer et al., 2011), we use the features of the discriminator of a DCGAN for supervised purposes when labeled data is scarce. Following similar dataset preparation rules as in the CIFAR-10 experiments, we split off a validation set of 10,000 examples from the non-extra set and use it for all hyperparameter and model selection. 1000 uniformly class distributed training examples are randomly selected and used to train a regularized linear L2-SVM classifier on top of the same feature extraction pipeline used for CIFAR-10. This achieves state of the art (for classification using 1000 labels) at 22.48% test error, improving upon another modification of CNNs designed to leverage unlabeled data (Zhao et al., 2015). Additionally, we validate that the CNN architecture used in DCGAN is not the key contributing factor of the model's performance by training a purely supervised CNN with the same architecture on the same data and optimizing this model via random search over 64 hyperparameter trials (Bergstra & Bengio, 2012). | 1511.06434#16 | 1511.06434#18 | 1511.06434 | [
"1505.00853"
] |
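The chunks above describe the feature-extraction pipeline used for both CIFAR-10 and SVHN: the discriminator's convolutional activations from all layers are max-pooled to a 4x4 grid each, flattened and concatenated, and a regularized linear L2-SVM is trained on top. The sketch below is one way to realize that pipeline; how the intermediate activations are collected (a list of conv blocks) and the SVM regularization constant are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

def discriminator_features(disc_conv_blocks, images):
    """Max-pool each conv block's output to a 4x4 grid, flatten, and concatenate."""
    pool = nn.AdaptiveMaxPool2d((4, 4))
    feats, x = [], images
    with torch.no_grad():
        for block in disc_conv_blocks:       # assumed: list of the discriminator's conv blocks
            x = block(x)
            feats.append(pool(x).flatten(1))
    return torch.cat(feats, dim=1)           # the paper's stack yields a 28672-dim vector

# Assumed usage:
# X_train = discriminator_features(disc_conv_blocks, cifar_train_images).numpy()
# clf = LinearSVC(C=1.0)                     # regularized linear L2-SVM
# clf.fit(X_train, y_train)
```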
1511.06434#18 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | It achieves a significantly higher 28.87% validation error. # 6 INVESTIGATING AND VISUALIZING THE INTERNALS OF THE NETWORKS We investigate the trained generators and discriminators in a variety of ways. We do not do any kind of nearest neighbor search on the training set. Nearest neighbors in pixel or feature space are trivially fooled (Theis et al., 2015) by small image transforms. We also do not use log-likelihood metrics to quantitatively assess the model, as it is a poor (Theis et al., 2015) metric.
Table 2: SVHN classification with 1000 labels (Model: error rate)
KNN: 77.93%
TSVM: 66.55%
M1+KNN: 65.63%
M1+TSVM: 54.33%
M1+M2: 36.02%
SWWAE without dropout: 27.83%
SWWAE with dropout: 23.56%
DCGAN (ours) + L2-SVM: 22.48%
Supervised CNN with the same architecture: 28.87% (validation) | 1511.06434#17 | 1511.06434#19 | 1511.06434 | [
"1505.00853"
] |
1511.06434#19 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 6.1 WALKING IN THE LATENT SPACE The ï¬ rst experiment we did was to understand the landscape of the latent space. Walking on the manifold that is learnt can usually tell us about signs of memorization (if there are sharp transitions) and about the way in which the space is hierarchically collapsed. If walking in this latent space results in semantic changes to the image generations (such as objects being added and removed), we can reason that the model has learned relevant and interesting representations. The results are shown in Fig.4. 6.2 VISUALIZING THE DISCRIMINATOR FEATURES Previous work has demonstrated that supervised training of CNNs on large image datasets results in very powerful learned features (Zeiler & Fergus, 2014). Additionally, supervised CNNs trained on scene classiï¬ cation learn object detectors (Oquab et al., 2014). We demonstrate that an unsupervised DCGAN trained on a large image dataset can also learn a hierarchy of features that are interesting. Using guided backpropagation as proposed by (Springenberg et al., 2014), we show in Fig.5 that the features learnt by the discriminator activate on typical parts of a bedroom, like beds and windows. | 1511.06434#18 | 1511.06434#20 | 1511.06434 | [
"1505.00853"
] |
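The latent-space walk described above amounts to interpolating between random Z samples and inspecting the generations for sharp transitions. A minimal sketch of that interpolation is below; linear interpolation and the uniform [-1, 1] sampling follow the setup described earlier in the paper, while the number of steps is arbitrary.

```python
import numpy as np

def interpolate_z(z_start, z_end, steps=9):
    """Linearly interpolate between two latent points, endpoints included."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_start + alphas * z_end     # shape: (steps, nz)

rng = np.random.default_rng(0)
z0, z1 = rng.uniform(-1, 1, size=(2, 100))               # two 100-dim uniform Z samples
path = interpolate_z(z0, z1)
# Assumed usage: images = generator(torch.from_numpy(path).float())
```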
1511.06434#20 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | For comparison, in the same ï¬ gure, we give a baseline for randomly initialized features that are not activated on anything that is semantically relevant or interesting. 6.3 MANIPULATING THE GENERATOR REPRESENTATION 6.3.1 FORGETTING TO DRAW CERTAIN OBJECTS In addition to the representations learnt by a discriminator, there is the question of what representa- tions the generator learns. The quality of samples suggest that the generator learns speciï¬ c object representations for major scene components such as beds, windows, lamps, doors, and miscellaneous furniture. In order to explore the form that these representations take, we conducted an experiment to attempt to remove windows from the generator completely. On 150 samples, 52 window bounding boxes were drawn manually. On the second highest con- volution layer features, logistic regression was ï¬ t to predict whether a feature activation was on a window (or not), by using the criterion that activations inside the drawn bounding boxes are posi- tives and random samples from the same images are negatives. Using this simple model, all feature maps with weights greater than zero ( 200 in total) were dropped from all spatial locations. Then, random new samples were generated with and without the feature map removal. The generated images with and without the window dropout are shown in Fig.6, and interestingly, the network mostly forgets to draw windows in the bedrooms, replacing them with other objects. | 1511.06434#19 | 1511.06434#21 | 1511.06434 | [
"1505.00853"
] |
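The window-removal experiment above fits a logistic regression on second-highest-layer feature activations (positives inside manually drawn window boxes, negatives elsewhere) and then zeros every feature map with a positive weight before regenerating samples. The sketch below illustrates those two steps; how activations are gathered into a per-location (N, C) design matrix is an assumption left to the caller.

```python
import torch
from sklearn.linear_model import LogisticRegression

def find_window_maps(activations, labels):
    """activations: array of shape (N, C), one row per sampled spatial location;
    labels: 1 if the location lies inside a drawn window bounding box, else 0."""
    clf = LogisticRegression(max_iter=1000).fit(activations, labels)
    weights = torch.tensor(clf.coef_[0])
    return (weights > 0).nonzero(as_tuple=True)[0]       # indices of "window" feature maps

def drop_feature_maps(feature_tensor, map_indices):
    """Zero the selected maps at every spatial location before the remaining layers run."""
    ablated = feature_tensor.clone()
    ablated[:, map_indices] = 0.0
    return ablated
```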
1511.06434#21 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 7 # Under review as a conference paper at ICLR 2016 Figure 4: Top rows: Interpolation between a series of 9 random points in Z show that the space learned has smooth transitions, with every image in the space plausibly looking like a bedroom. In the 6th row, you see a room without a window slowly transforming into a room with a giant window. In the 10th row, you see what appears to be a TV slowly being transformed into a window. | 1511.06434#20 | 1511.06434#22 | 1511.06434 | [
"1505.00853"
] |
1511.06434#22 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | # 6.3.2 VECTOR ARITHMETIC ON FACE SAMPLES In the context of evaluating learned representations of words (Mikolov et al., 2013) demonstrated that simple arithmetic operations revealed rich linear structure in representation space. One canoni- cal example demonstrated that the vector(â Kingâ ) - vector(â Manâ ) + vector(â Womanâ ) resulted in a vector whose nearest neighbor was the vector for Queen. We investigated whether similar structure emerges in the Z representation of our generators. We performed similar arithmetic on the Z vectors of sets of exemplar samples for visual concepts. Experiments working on only single samples per concept were unstable, but averaging the Z vector for three examplars showed consistent and stable generations that semantically obeyed the arithmetic. In addition to the object manipulation shown in (Fig. 7), we demonstrate that face pose is also modeled linearly in Z space (Fig. 8). These demonstrations suggest interesting applications can be developed using Z representations learned by our models. It has been previously demonstrated that conditional generative models can learn to convincingly model object attributes like scale, rotation, and position (Dosovitskiy et al., 2014). | 1511.06434#21 | 1511.06434#23 | 1511.06434 | [
"1505.00853"
] |
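The vector-arithmetic experiment above averages the Z vectors of three exemplars per visual concept, performs the arithmetic on those averages, and feeds the result (plus small uniform noise for the surrounding grid samples, as Figure 7 later describes) to the generator. A sketch with numpy; the exemplar arrays are assumed to have been collected already.

```python
import numpy as np

def concept_vector(exemplar_zs):
    """Average the Z vectors of a few exemplars of one visual concept (3 in the paper)."""
    return np.mean(np.asarray(exemplar_zs), axis=0)

def face_arithmetic(z_smiling_woman, z_neutral_woman, z_neutral_man,
                    n_samples=8, scale=0.25):
    """'smiling woman' - 'neutral woman' + 'neutral man' ~ 'smiling man' in Z space."""
    y = (concept_vector(z_smiling_woman)
         - concept_vector(z_neutral_woman)
         + concept_vector(z_neutral_man))
    noise = np.random.uniform(-scale, scale, size=(n_samples, y.shape[0]))
    return y, y + noise      # center vector plus jittered neighbours for the sample grid

# Assumed usage: images = generator(torch.from_numpy(samples).float())
```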
1511.06434#23 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | This is to our knowledge the ï¬ rst demonstration of this occurring in purely unsupervised 8 # Under review as a conference paper at ICLR 2016 Random filters Trained filters Figure 5: On the right, guided backpropagation visualizations of maximal axis-aligned responses for the ï¬ rst 6 learned convolutional features from the last convolution layer in the discriminator. Notice a signiï¬ cant minority of features respond to beds - the central object in the LSUN bedrooms dataset. On the left is a random ï¬ lter baseline. Comparing to the previous responses there is little to no discrimination and random structure. Figure 6: Top row: un-modiï¬ ed samples from model. Bottom row: the same samples generated with dropping out â | 1511.06434#22 | 1511.06434#24 | 1511.06434 | [
"1505.00853"
] |
1511.06434#24 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | windowâ ï¬ lters. Some windows are removed, others are transformed into objects with similar visual appearance such as doors and mirrors. Although visual quality decreased, overall scene composition stayed similar, suggesting the generator has done a good job disentangling scene representation from object representation. Extended experiments could be done to remove other objects from the image and modify the objects the generator draws. models. Further exploring and developing the above mentioned vector arithmetic could dramat- ically reduce the amount of data needed for conditional generative modeling of complex image distributions. # 7 CONCLUSION AND FUTURE WORK We propose a more stable set of architectures for training generative adversarial networks and we give evidence that adversarial networks learn good representations of images for supervised learning and generative modeling. There are still some forms of model instability remaining - we noticed as models are trained longer they sometimes collapse a subset of ï¬ lters to a single oscillating mode. | 1511.06434#23 | 1511.06434#25 | 1511.06434 | [
"1505.00853"
] |
1511.06434#25 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | (Figure panel labels, recoverable text only: "smiling woman" - "neutral woman" + "neutral man" = "smiling man"; "man with glasses" - "man without glasses" + "woman without glasses" = "woman with glasses"; "Results of doing the same arithmetic in pixel space".) Figure 7: Vector arithmetic for visual concepts. For each column, the Z vectors of samples are averaged. Arithmetic was then performed on the mean vectors creating a new vector Y. The center sample on the right hand side is produced by feeding Y as input to the generator. To demonstrate the interpolation capabilities of the generator, uniform noise sampled with scale +-0.25 was added to Y to produce the 8 other samples. Applying arithmetic in the input space (bottom two examples) results in noisy overlap due to misalignment. Further work is needed to tackle this form of instability. We think that extending this framework | 1511.06434#24 | 1511.06434#26 | 1511.06434 | [
"1505.00853"
] |
1511.06434#26 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 10 # Under review as a conference paper at ICLR 2016 Figure 8: A â turnâ vector was created from four averaged samples of faces looking left vs looking right. By adding interpolations along this axis to random samples we were able to reliably transform their pose. to other domains such as video (for frame prediction) and audio (pre-trained features for speech synthesis) should be very interesting. Further investigations into the properties of the learnt latent space would be interesting as well. # ACKNOWLEDGMENTS We are fortunate and thankful for all the advice and guidance we have received during this work, especially that of Ian Goodfellow, Tobias Springenberg, Arthur Szlam and Durk Kingma. Addition- ally weâ d like to thank all of the folks at indico for providing support, resources, and conversations, especially the two other members of the indico research team, Dan Kuster and Nathan Lintz. Finally, weâ d like to thank Nvidia for donating a Titan-X GPU used in this work. # REFERENCES Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. JMLR, 2012. Coates, Adam and Ng, Andrew. Selecting receptive ï¬ elds in deep networks. NIPS, 2011. Coates, Adam and Ng, Andrew Y. Learning feature representations with k-means. In Neural Net- works: | 1511.06434#25 | 1511.06434#27 | 1511.06434 | [
"1505.00853"
] |
1511.06434#27 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Tricks of the Trade, pp. 561â 580. Springer, 2012. Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248â 255. IEEE, 2009. Denton, Emily, Chintala, Soumith, Szlam, Arthur, and Fergus, Rob. | 1511.06434#26 | 1511.06434#28 | 1511.06434 | [
"1505.00853"
] |
1511.06434#28 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Deep generative image models using a laplacian pyramid of adversarial networks. arXiv preprint arXiv:1506.05751, 2015. Dosovitskiy, Alexey, Springenberg, Jost Tobias, and Brox, Thomas. Learning to generate chairs with convolutional neural networks. arXiv preprint arXiv:1411.5928, 2014. 11 # Under review as a conference paper at ICLR 2016 Dosovitskiy, Alexey, Fischer, Philipp, Springenberg, Jost Tobias, Riedmiller, Martin, and Brox, Thomas. Discriminative unsupervised feature learning with exemplar convolutional neural net- works. In Pattern Analysis and Machine Intelligence, IEEE Transactions on, volume 99. IEEE, 2015. Efros, Alexei, Leung, Thomas K, et al. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033â 1038. IEEE, 1999. Freeman, William T, Jones, Thouis R, and Pasztor, Egon C. | 1511.06434#27 | 1511.06434#29 | 1511.06434 | [
"1505.00853"
] |
1511.06434#29 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Example-based super-resolution. Com- puter Graphics and Applications, IEEE, 22(2):56â 65, 2002. Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron, and Bengio, Yoshua. Maxout networks. arXiv preprint arXiv:1302.4389, 2013. Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. NIPS, 2014. | 1511.06434#28 | 1511.06434#30 | 1511.06434 | [
"1505.00853"
] |
1511.06434#30 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015. Hardt, Moritz, Recht, Benjamin, and Singer, Yoram. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015. | 1511.06434#29 | 1511.06434#31 | 1511.06434 | [
"1505.00853"
] |
1511.06434#31 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Hauberg, Sren, Freifeld, Oren, Larsen, Anders Boesen Lindbo, Fisher III, John W., and Hansen, Lars Kair. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. arXiv preprint arXiv:1510.02795, 2015. Hays, James and Efros, Alexei A. Scene completion using millions of photographs. ACM Transac- tions on Graphics (TOG), 26(3):4, 2007. Ioffe, Sergey and Szegedy, Christian. | 1511.06434#30 | 1511.06434#32 | 1511.06434 | [
"1505.00853"
] |
1511.06434#32 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Kingma, Diederik P and Ba, Jimmy Lei. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Lee, Honglak, Grosse, Roger, Ranganath, Rajesh, and Ng, Andrew Y. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609â | 1511.06434#31 | 1511.06434#33 | 1511.06434 | [
"1505.00853"
] |
1511.06434#33 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 616. ACM, 2009. Loosli, Ga¨elle, Canu, St´ephane, and Bottou, L´eon. Training invariant support vector machines using In Bottou, L´eon, Chapelle, Olivier, DeCoste, Dennis, and Weston, Jason selective sampling. (eds.), Large Scale Kernel Machines, pp. 301â 320. MIT Press, Cambridge, MA., 2007. URL http://leon.bottou.org/papers/loosli-canu-bottou-2006. Maas, Andrew L, Hannun, Awni Y, and Ng, Andrew Y. | 1511.06434#32 | 1511.06434#34 | 1511.06434 | [
"1505.00853"
] |
1511.06434#34 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Rectiï¬ er nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013. Mikolov, Tomas, Sutskever, Ilya, Chen, Kai, Corrado, Greg S, and Dean, Jeff. Distributed repre- sentations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111â 3119, 2013. Inceptionism : Going deeper into neural networks. http://googleresearch.blogspot.com/2015/06/ inceptionism-going-deeper-into-neural.html. | 1511.06434#33 | 1511.06434#35 | 1511.06434 | [
"1505.00853"
] |
1511.06434#35 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Accessed: 2015-06-17. Nair, Vinod and Hinton, Geoffrey E. Rectiï¬ ed linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807â 814, 2010. 12 # Under review as a conference paper at ICLR 2016 Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Read- ing digits in natural images with unsupervised feature learning. In NIPS workshop on deep learn- ing and unsupervised feature learning, volume 2011, pp. 5. | 1511.06434#34 | 1511.06434#36 | 1511.06434 | [
"1505.00853"
] |
1511.06434#36 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Granada, Spain, 2011. Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image represen- tations using convolutional neural networks. In CVPR, 2014. Portilla, Javier and Simoncelli, Eero P. A parametric texture model based on joint statistics of complex wavelet coefï¬ cients. International Journal of Computer Vision, 40(1):49â 70, 2000. | 1511.06434#35 | 1511.06434#37 | 1511.06434 | [
"1505.00853"
] |
1511.06434#37 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Rasmus, Antti, Valpola, Harri, Honkala, Mikko, Berglund, Mathias, and Raiko, Tapani. Semi- supervised learning with ladder network. arXiv preprint arXiv:1507.02672, 2015. Sohl-Dickstein, Jascha, Weiss, Eric A, Maheswaranathan, Niru, and Ganguli, Surya. Deep unsuper- vised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015. Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. | 1511.06434#36 | 1511.06434#38 | 1511.06434 | [
"1505.00853"
] |
1511.06434#38 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, J¨urgen. Under- standing locally competitive networks. arXiv preprint arXiv:1410.1165, 2014. Theis, L., van den Oord, A., and Bethge, M. A note on the evaluation of generative models. arXiv:1511.01844, Nov 2015. URL http://arxiv.org/abs/1511.01844. Vincent, Pascal, Larochelle, Hugo, Lajoie, Isabelle, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371â 3408, 2010. Xu, Bing, Wang, Naiyan, Chen, Tianqi, and Li, Mu. | 1511.06434#37 | 1511.06434#39 | 1511.06434 | [
"1505.00853"
] |
1511.06434#39 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Empirical evaluation of rectiï¬ ed activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015. Yu, Fisher, Zhang, Yinda, Song, Shuran, Seff, Ari, and Xiao, Jianxiong. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional networks. Computer Visionâ ECCV 2014, pp. 818â 833. Springer, 2014. In Zhao, Junbo, Mathieu, Michael, Goroshin, Ross, and Lecun, Yann. Stacked what-where auto- encoders. arXiv preprint arXiv:1506.02351, 2015. | 1511.06434#38 | 1511.06434#40 | 1511.06434 | [
"1505.00853"
] |
1511.06434#40 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | 13 Under review as a conference paper at ICLR 2016 # 8 SUPPLEMENTARY MATERIAL 8.1 EVALUATING DCGANS CAPABILITY TO CAPTURE DATA DISTRIBUTIONS We propose to apply standard classiï¬ cation metrics to a conditional version of our model, evaluating the conditional distributions learned. We trained a DCGAN on MNIST (splitting off a 10K validation set) as well as a permutation invariant GAN baseline and evaluated the models using a nearest neighbor classiï¬ er comparing real data to a set of generated conditional samples. We found that removing the scale and bias parameters from batchnorm produced better results for both models. We speculate that the noise introduced by batchnorm helps the generative models to better explore and generate from the underlying data distribution. The results are shown in Table 3 which compares our models with other techniques. The DCGAN model achieves the same test error as a nearest neighbor classiï¬ | 1511.06434#39 | 1511.06434#41 | 1511.06434 | [
"1505.00853"
] |
1511.06434#41 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | er ï¬ tted on the training dataset - suggesting the DCGAN model has done a superb job at modeling the conditional distributions of this dataset. At one million samples per class, the DCGAN model outperforms Inï¬ MNIST (Loosli et al., 2007), a hand developed data augmentation pipeline which uses translations and elastic deformations of training examples. The DCGAN is competitive with a probabilistic generative data augmentation technique utilizing learned per class transformations (Hauberg et al., 2015) while being more general as it directly models the data instead of transformations of the data. | 1511.06434#40 | 1511.06434#42 | 1511.06434 | [
"1505.00853"
] |
1511.06434#42 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Table 3: Nearest neighbor classification results (Model: Test Error @50K samples / Test Error @10M samples)
AlignMNIST: - / 1.4%
InfiMNIST: - / 2.6%
Real Data: 3.1% / -
GAN: 6.28% / 5.65%
DCGAN (ours): 2.98% / 1.48% | 1511.06434#41 | 1511.06434#43 | 1511.06434 | [
"1505.00853"
] |
1511.06434#43 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | Figure 9: Side-by-side illustration of (from left-to-right) the MNIST dataset, generations from a baseline GAN, and generations from our DCGAN. Figure 10: More face generations from our Face DCGAN. Figure 11: Generations of a DCGAN that was trained on the Imagenet-1k dataset. | 1511.06434#42 | 1511.06434 | [
"1505.00853"
] |
|
1511.06342#0 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | arXiv:1511.06342v4 [cs.LG] 22 Feb 2016 Published as a conference paper at ICLR 2016 # ACTOR-MIMIC DEEP MULTITASK AND TRANSFER REINFORCEMENT LEARNING Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov Department of Computer Science University of Toronto Toronto, Ontario, Canada {eparisotto,jimmy,rsalakhu}@cs.toronto.edu # ABSTRACT | 1511.06342#1 | 1511.06342 | [
"1503.02531"
] |
|
1511.06342#1 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. To- wards this goal, we deï¬ ne a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultane- ously, and then generalize its knowledge to new domains. This method, termed â Actor-Mimicâ , exploits the use of deep reinforcement learning and model com- pression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of general- izing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods. # INTRODUCTION Deep Reinforcement Learning (DRL), the combination of reinforcement learning methods and deep neural network function approximators, has recently shown considerable success in high- dimensional challenging tasks, such as robotic manipulation (Levine et al., 2015; Lillicrap et al., 2015) and arcade games (Mnih et al., 2015). These methods exploit the ability of deep networks to learn salient descriptions of raw state input, allowing the agent designer to essentially bypass the lengthy process of feature engineering. In addition, these automatically learnt descriptions often sig- niï¬ cantly outperform hand-crafted feature representations that require extensive domain knowledge. One such DRL approach, the Deep Q-Network (DQN) (Mnih et al., 2015), has achieved state-of- the-art results on the Arcade Learning Environment (ALE) (Bellemare et al., 2013), a benchmark of Atari 2600 arcade games. The DQN uses a deep convolutional neural network over pixel inputs to parameterize a state-action value function. The DQN is trained using Q-learning combined with sev- eral tricks that stabilize the training of the network, such as a replay memory to store past transitions and target networks to deï¬ | 1511.06342#0 | 1511.06342#2 | 1511.06342 | [
"1503.02531"
] |
1511.06342#2 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | ne a more consistent temporal difference error. Although the DQN maintains the same network architecture and hyperparameters for all games, the approach is limited in the fact that each network only learns how to play a single game at a time, despite the existence of similarities between games. For example, the tennis-like game of pong and the squash-like game of breakout are both similar in that each game consists of trying to hit a moving ball with a rectangular paddle. A network trained to play multiple games would be able to generalize its knowledge between the games, achieving a single compact state representation as the inter-task similarities are exploited by the network. Having been trained on enough source tasks, the multitask network can also exhibit transfer to new target tasks, which can speed up learning. Training DRL agents can be extremely computationally intensive and therefore reducing training time is a signiï¬ cant practical beneï¬ t. 1 | 1511.06342#1 | 1511.06342#3 | 1511.06342 | [
"1503.02531"
] |
1511.06342#3 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Published as a conference paper at ICLR 2016 The contribution of this paper is to develop and evaluate methods that enable multitask and trans- fer learning for DRL agents, using the ALE as a test environment. To ï¬ rst accomplish multitask learning, we design a method called â Actor-Mimicâ that leverages techniques from model compres- sion to train a single multitask network using guidance from a set of game-speciï¬ c expert networks. The particular form of guidance can vary, and several different approaches are explored and tested empirically. | 1511.06342#2 | 1511.06342#4 | 1511.06342 | [
"1503.02531"
] |
1511.06342#4 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | To then achieve transfer learning, we treat a multitask network as being a DQN which was pre-trained on a set of source tasks. We show experimentally that this multitask pre-training can result in a DQN that learns a target task significantly faster than a DQN starting from a random initialization, effectively demonstrating that the source task representations generalize to the target task. # 2 BACKGROUND: DEEP REINFORCEMENT LEARNING A Markov Decision Process (MDP) is defined as a tuple $(\mathcal{S}, \mathcal{A}, T, R, \gamma)$ where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $T(s'|s,a)$ is the transition probability of ending up in state $s'$ when executing action $a$ in state $s$, $R$ is the reward function mapping states in $\mathcal{S}$ to rewards in $\mathbb{R}$, and $\gamma$ is a discount factor. An agent's behaviour in an MDP is represented as a policy $\pi(a|s)$ which defines the probability of executing action $a$ in state $s$. For a given policy, we can further define the Q-value function $Q^{\pi}(s,a) = \mathbb{E}\left[\sum_{t=0}^{H} \gamma^{t} r_{t} \mid s_0 = s, a_0 = a\right]$ where $H$ is the step when the game ends. The Q-function represents the expected future discounted reward when starting in a state $s$, executing $a$, and then following policy $\pi$ until a terminating state is reached. There always exists at least one optimal state-action value function, $Q^*(s,a)$, such that $\forall s \in \mathcal{S}, a \in \mathcal{A},\ Q^*(s,a) = \max_{\pi} Q^{\pi}(s,a)$ (Sutton & Barto, 1998). The optimal Q-function can be rewritten as a Bellman equation: $Q^*(s,a) = \mathbb{E}_{s' \sim T(\cdot|s,a)}\left[ r + \gamma \cdot \max_{a' \in \mathcal{A}} Q^*(s',a') \right]. \quad (1)$ | 1511.06342#3 | 1511.06342#5 | 1511.06342 | [
"1503.02531"
] |
1511.06342#5 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | An optimal policy can be constructed from the optimal Q-function by choosing, for a given state, the action with highest Q-value. Q-learning, a reinforcement learning algorithm, uses iterative backups of the Q-function to converge towards the optimal Q-function. Using a tabular representation of the Q-function, this is equivalent to setting $Q^{(n+1)}(s,a) = \mathbb{E}_{s' \sim T(\cdot|s,a)}\left[ r + \gamma \max_{a' \in \mathcal{A}} Q^{(n)}(s',a') \right]$ for the (n+1)th update step (Sutton & Barto, 1998). Because the state space in the ALE is too large to tractably store a tabular representation of the Q-function, the Deep Q-Network (DQN) approach uses a deep function approximator to represent the state-action value function (Mnih et al., 2015). To train a DQN on the (n+1)th step, we set the network's loss to $L_{DQN}(\theta^{(n+1)}) = \mathbb{E}_{s,a,r,s' \sim \mathcal{M}(\cdot)}\left[ \left( r + \gamma \max_{a' \in \mathcal{A}} Q(s',a'; \theta^{(n)}) - Q(s,a; \theta^{(n+1)}) \right)^{2} \right], \quad (2)$ where $\mathcal{M}(\cdot)$ is a uniform probability distribution over a replay memory, which is a set of the m previous (s, a, r, s') transition tuples seen during play, where m is the size of the memory. The replay memory is used to reduce correlations between adjacent states and is shown to have large effect on the stability of training the network in some games. | 1511.06342#4 | 1511.06342#6 | 1511.06342 | [
"1503.02531"
] |
1511.06342#6 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | loss to $L^{(n+1)}(\theta) = \mathbb{E}_{s,a,r,s' \sim \mathcal{M}(\cdot)}\left[\left(r + \gamma \max_{a' \in A} Q(s', a'; \theta^{(n)}) - Q(s, a; \theta^{(n+1)})\right)^{2}\right], \quad (2)$ where M(·) is a uniform probability distribution over a replay memory, which is a set of the m previous (s, a, r, s') transition tuples seen during play, where m is the size of the memory. The replay memory is used to reduce correlations between adjacent states and is shown to have a large effect on the stability of training the network in some games. | 1511.06342#5 | 1511.06342#7 | 1511.06342 | [
"1503.02531"
] |
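A schematic rendering of the DQN regression loss of Eq. (2) on a minibatch drawn uniformly from the replay memory; `q_online`, `q_target`, and the minibatch format are illustrative stand-ins rather than the authors' implementation.

```python
import numpy as np

def dqn_loss(q_online, q_target, batch, gamma=0.99):
    """Mean squared TD error of Eq. (2). `q_online(s)` / `q_target(s)` return
    per-action Q-value arrays; `batch` is a list of (s, a, r, s_prime) tuples
    sampled uniformly from the replay memory (names are hypothetical)."""
    errors = []
    for s, a, r, s_prime in batch:
        target = r + gamma * np.max(q_target(s_prime))   # r + gamma * max_a' Q(s', a'; theta^(n))
        errors.append((target - q_online(s)[a]) ** 2)    # (target - Q(s, a; theta^(n+1)))^2
    return float(np.mean(errors))
```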
1511.06342#7 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | 3 ACTOR-MIMIC 3.1 POLICY REGRESSION OBJECTIVE Given a set of source games S1, ..., SN, our first goal is to obtain a single multitask policy network that can play any source game at as near an expert level as possible. To train this multitask policy network, we use guidance from a set of expert DQN networks E1, ..., EN, where Ei is an expert specialized in source task Si. One possible definition of "guidance" would be a squared loss that matches Q-values between the student network and the experts. As the range of the expert value functions can vary widely between games, we found it difficult to directly distill knowledge from the expert value functions. The alternative we develop here is to instead match policies by first transforming Q-values using a softmax. Using the softmax gives us outputs which | 1511.06342#6 | 1511.06342#8 | 1511.06342 | [
"1503.02531"
] |
1511.06342#8 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | are bounded in the unit interval, so the effects of the different scales of each expert's Q-function are diminished, achieving higher stability during learning. Intuitively, we can view the softmax as forcing the student to focus on mimicking the action chosen by the guiding expert at each state, where the exact values of the state are less important. We call this method "Actor-Mimic" because it is an actor, i.e. a policy, that mimics the decisions of a set of experts. | 1511.06342#7 | 1511.06342#9 | 1511.06342 | [
"1503.02531"
] |
1511.06342#9 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | In particular, our technique first transforms each expert DQN into a policy network by applying a Boltzmann distribution over its Q-value outputs: $\pi_{E_i}(a|s) = \frac{e^{\tau^{-1} Q_{E_i}(s,a)}}{\sum_{a' \in A_{E_i}} e^{\tau^{-1} Q_{E_i}(s,a')}}, \quad (3)$ where τ is a temperature parameter and $A_{E_i}$ is the action space used by the expert Ei, with $A_{E_i} \subseteq A$. Given a state s from source task Si, we then define the policy objective over the multitask network as the cross-entropy between the expert network's policy and the current multitask policy: $L^{i}_{policy}(\theta) = \sum_{a \in A_{E_i}} \pi_{E_i}(a|s) \log \pi_{AMN}(a|s; \theta), \quad (4)$ where $\pi_{AMN}(a|s; \theta)$ is the multitask Actor-Mimic Network (AMN) policy, parameterized by θ. | 1511.06342#8 | 1511.06342#10 | 1511.06342 | [
"1503.02531"
] |
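Equations (3) and (4) amount to a temperature softmax followed by a cross-entropy. The sketch below assumes plain NumPy arrays of expert Q-values and AMN output probabilities for a single state; minimizing the returned value is equivalent to maximizing the objective written in Eq. (4).

```python
import numpy as np

def boltzmann_policy(q_values, tau=1.0):
    """Eq. (3): softmax over an expert's Q-values with temperature tau."""
    z = q_values / tau
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def policy_regression_loss(expert_q, amn_probs, tau=1.0):
    """Cross-entropy between the expert's Boltzmann policy and the AMN policy
    for one state (the full objective averages over states sampled from the
    AMN's epsilon-greedy behaviour)."""
    pi_expert = boltzmann_policy(expert_q, tau)
    return -np.sum(pi_expert * np.log(amn_probs + 1e-12))
```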
1511.06342#10 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | In contrast to the Q-learning objective, which recursively relies on itself as a target value, we now have a stable supervised training signal (the expert network output) to guide the multitask network. To acquire training data, we can sample either the expert network or the AMN action outputs to generate the trajectories used in the loss. Empirically we have observed that sampling from the AMN while it is learning gives the best results. We later prove that in either case, sampling from the expert or from the AMN as it learns, the AMN will converge to the expert policy under the policy regression loss, at least when the AMN is a linear function approximator. We use an ε-greedy policy no matter which network we sample actions from, which with probability ε picks a random action uniformly and with probability 1 − | 1511.06342#9 | 1511.06342#11 | 1511.06342 | [
"1503.02531"
] |
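The ε-greedy behaviour policy described above can be sketched as follows; the helper name and the choice to sample from the network's output distribution (rather than take an argmax) are illustrative assumptions.

```python
import numpy as np

def epsilon_greedy_action(policy_probs, epsilon=0.1, rng=np.random):
    """With probability epsilon pick a uniformly random action, otherwise
    draw an action from the network's output distribution."""
    num_actions = len(policy_probs)
    if rng.rand() < epsilon:
        return rng.randint(num_actions)
    return int(rng.choice(num_actions, p=policy_probs))
```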
1511.06342#11 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | ε chooses an action from the network. 3.2 FEATURE REGRESSION OBJECTIVE We can obtain further guidance from the expert networks in the following way. Let hAMN(s) and hEi(s) be the hidden activations in the feature (pre-output) layer of the AMN and the i-th expert network, respectively, computed from the input state s. Note that the dimension of hAMN(s) does not need to equal that of hEi(s), and this is the case in some of our experiments. We define a feature regression network fi(hAMN(s)) that, for a given state s, attempts to predict the features hEi(s) from hAMN(s). The architecture of the mapping fi can be defined arbitrarily, and fi can be trained using the following feature regression loss: $L^{i}_{FeatureRegression}(\theta, \theta_{f_i}) = \left\| f_i(h_{AMN}(s; \theta); \theta_{f_i}) - h_{E_i}(s) \right\|_{2}^{2}, \quad (5)$ where θ and $\theta_{f_i}$ are the parameters of the AMN and the i-th feature regression network, respectively. When training this objective, the error is fully back-propagated from the feature regression network output through the layers of the AMN. In this way, the feature regression objective provides pressure on the AMN to compute features that can predict an expert's features. A justification for this objective is that if we have a perfect regression from multitask to expert features, all the information in the expert features is contained in the multitask features. The use of a separate feature prediction network fi for each task enables the multitask network to have a different feature dimension than the experts and prevents issues with identifiability. Empirically we have found that the feature regression objective's primary benefit is that it can increase the performance of transfer learning in some target tasks. 3.3 ACTOR-MIMIC OBJECTIVE Combining both regression objectives, the Actor-Mimic objective is defined as $L^{i}_{ActorMimic}(\theta, \theta_{f_i}) = L^{i}_{policy}(\theta) + \beta \cdot L^{i}_{FeatureRegression}(\theta, \theta_{f_i}), \quad (6)$ where β is a scaling parameter which controls the relative weighting of the two objectives. | 1511.06342#10 | 1511.06342#12 | 1511.06342 | [
"1503.02531"
] |
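A compact sketch of the feature regression loss of Eq. (5) and the combined objective of Eq. (6). It assumes a linear feature-prediction map (the choice used for the transfer experiments in Appendix B); the parameter names are hypothetical.

```python
import numpy as np

def feature_regression_loss(W_i, b_i, h_amn, h_expert):
    """Eq. (5) with a linear feature-prediction network f_i. In general f_i
    can be an arbitrary network; a linear projection is the simplest case."""
    pred = h_amn @ W_i + b_i
    return float(np.sum((pred - h_expert) ** 2))

def actor_mimic_loss(policy_loss, feature_loss, beta=0.01):
    """Eq. (6): the combined Actor-Mimic objective; beta weights the two terms."""
    return policy_loss + beta * feature_loss
```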
1511.06342#12 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Intuitively, we can think of the policy regression objective as a teacher (expert network) telling a student (AMN) how it should act (mimic the expert's actions), while the feature regression objective is analogous to a teacher telling a student why it should act that way (mimic the expert's thinking process). 3.4 TRANSFERRING KNOWLEDGE: ACTOR-MIMIC AS PRETRAINING Now that we have a method of training a network that is an expert at all source tasks, we can proceed to the task of transferring source task knowledge to a novel but related target task. To enable transfer to a new task, we first | 1511.06342#11 | 1511.06342#13 | 1511.06342 | [
"1503.02531"
] |
1511.06342#13 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | remove the final softmax layer of the AMN. We then use the weights of the AMN as an initialization for a DQN that will be trained on the new target task. The pretrained DQN is then trained using the same procedure as a standard DQN. Multitask pretraining can be seen as initializing the DQN with a set of features that are effective at defining policies in related tasks. If the source and target tasks share similarities, it is probable that some of these pretrained features will also be effective at the target task (perhaps after slight fine-tuning). 4 CONVERGENCE PROPERTIES OF ACTOR-MIMIC We further study the convergence properties of the proposed Actor-Mimic under a framework similar to (Perkins & Precup, 2002). The analysis mainly focuses on L2-regularized policy regression without feature regression. Without loss of generality, the following analysis focuses on learning from a single game expert softmax policy $\pi_E$. | 1511.06342#12 | 1511.06342#14 | 1511.06342 | [
"1503.02531"
] |
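A sketch of the pretraining-based transfer step described above, using plain dictionaries of parameter arrays; the layer naming convention is invented for illustration.

```python
def init_target_dqn_from_amn(amn_params, dqn_params):
    """Initialize a target-task DQN from a trained AMN. The AMN's softmax
    output layer is dropped and the remaining weights are copied over before
    standard DQN training starts on the target task."""
    for name, value in amn_params.items():
        if name.startswith("softmax_output"):   # discard the AMN policy head
            continue
        dqn_params[name] = value.copy()
    return dqn_params
```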
1511.06342#14 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The analysis can be readily extended to consider multiple experts on multiple games by absorbing the different games into the same state space. Let $D_{\pi}(s)$ be the stationary distribution of the Markov decision process under policy π over states s ∈ S. The policy regression objective function can be rewritten using an expectation under the stationary distribution of the Markov decision process: $\mathbb{E}_{s \sim D_{\pi_{AMN,\epsilon\text{-greedy}}}(\cdot)}\left[\ell\left(\pi_E(a|s),\, p(a|s; \theta)\right)\right] + \frac{\lambda}{2} \|\theta\|_{2}^{2}, \quad (7)$ where ℓ(·) is the cross-entropy measure and λ is the coefficient of the weight decay that is necessary in the following analysis of the policy regression. Under Actor-Mimic, the learning agent interacts with the environment by following an ε-greedy strategy of some Q-function. The mapping from a Q-function to an ε-greedy policy $\pi_{\epsilon\text{-greedy}}$ is denoted by an operator Γ, where $\pi_{\epsilon\text{-greedy}} = \Gamma(Q)$. To avoid confusion onwards, we use the notation p(a|s; θ) for the softmax policies in the policy regression objective. Assume each state in the Markov decision process is represented by a compact K-dimensional feature representation φ(s) ∈ R^K. Consider a linear function approximator for Q-values with parameter matrix θ ∈ R^{K×|A|}, $\hat{Q}(s, a; \theta) = \phi(s)^{T} \theta_{a}$, where $\theta_a$ is the a-th column of θ. The corresponding softmax policy of the linear approximator is defined by $p(a|s; \theta) \propto \exp\{\hat{Q}(s, a; \theta)\}$. 4.1 STOCHASTIC STATIONARY POLICY For any stationary policy π*, the stationary point of the objective function Eq. (7) can be found by setting its gradient w.r.t. θ to zero. Let $P_{\theta}$ be a |S| × |A| matrix whose i-th row, j-th column element is the softmax policy prediction $p(a_j|s_i; \theta)$ from the linear approximator. Similarly, let $\Pi_{E}$ be a |S| × |A| matrix for the softmax policy prediction from the expert model. Additionally, let $D_{\pi}$ be a diagonal matrix whose entries are $D_{\pi}(s)$. | 1511.06342#13 | 1511.06342#15 | 1511.06342 | [
"1503.02531"
] |
1511.06342#15 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | A simple gradient-following algorithm on the objective function Eq. (7) has the following expected update rule, using a learning rate $\alpha_t > 0$ at the t-th iteration: $\Delta\theta_t = -\alpha_t \left[ \Phi^{T} D_{\pi} (P_{\theta_{t-1}} - \Pi_E) + \lambda \theta_{t-1} \right]. \quad (8)$ Lemma 1. Under a fixed policy π* and a learning rate schedule that satisfies $\sum_{t=1}^{\infty} \alpha_t = \infty$, $\sum_{t=1}^{\infty} \alpha_t^2 < \infty$, the parameters $\theta_t$, updated by the stochastic gradient descent learning algorithm described above, asymptotically almost surely converge to a unique solution θ*. When the policy π* is | 1511.06342#14 | 1511.06342#16 | 1511.06342 | [
"1503.02531"
] |
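The expected update of Eq. (8) can be stated directly in matrix form; the sketch below assumes dense NumPy matrices for Φ (|S| × K), the diagonal stationary-distribution matrix D_π, and the |S| × |A| policy matrices P_θ and Π_E, all of which are illustrative placeholders.

```python
import numpy as np

def expected_update(theta, Phi, D_pi, P_theta, Pi_E, alpha_t, lam):
    """Eq. (8): delta_theta = -alpha_t * (Phi^T D_pi (P_theta - Pi_E) + lam * theta)."""
    grad = Phi.T @ D_pi @ (P_theta - Pi_E) + lam * theta
    return -alpha_t * grad
```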
1511.06342#16 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | fixed, the objective function Eq. (7) is convex and is the same as a multinomial logistic regression problem with a bounded Lipschitz constant due to its compact input features. Hence there is a unique stationary point θ* such that ∇θ* = 0. The proof of Lemma 1 follows the stochastic approximation argument (Robbins & Monro, 1951). 4.2 STOCHASTIC ADAPTIVE POLICY Consider the following learning scheme to adapt the agent's policy. The learning agent interacts with the environment and samples states by following a fixed ε-greedy policy π'. Given the samples | 1511.06342#15 | 1511.06342#17 | 1511.06342 | [
"1503.02531"
] |
1511.06342#17 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | and the expert prediction, the linear function approximator parameters are updated using Eq. (8) to a unique stationary point θ'. [Figure 1 plot: per-game training curves; legend: AMN, DQN, DQN-Max, DQN-Mean.] Figure 1: The Actor-Mimic and expert DQN training curves for 100 training epochs for each of the 8 games. A training epoch is 250,000 frames, and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN and expert DQN test reward for each testing epoch and the mean and max of DQN performance. The max is calculated over all testing epochs that the DQN experienced until convergence, while the mean is calculated over the last ten epochs before DQN training was stopped. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch. The AMN results are averaged over 2 separately trained networks. | 1511.06342#16 | 1511.06342#18 | 1511.06342 | [
"1503.02531"
] |
1511.06342#18 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The new parameters θ' are then used to establish a new ε-greedy policy $\pi'' = \Gamma(Q_{\theta'})$ through the Γ operator over the linear function $Q_{\theta'}$. The agent under the new policy π'' subsequently samples a new set of states and actions from the Markov decision process to update its parameters. The learning agent therefore generates a sequence of policies {π¹, π², π³, ...}. The proof of the following theorem is given in Appendix A. Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy π induced by the Γ operator and that Γ is Lipschitz continuous with a constant $c_\epsilon$; then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution π* and θ*. 4.3 PERFORMANCE GUARANTEE The convergence theorem implies that the Actor-Mimic learning algorithm also belongs to the family of no-regret algorithms in the online learning framework; see Ross et al. (2011) for more details. Their theoretical analysis can be directly applied to Actor-Mimic and results in a performance guarantee bound on how well the Actor-Mimic model performs with respect to the guiding expert. Let $Z^{t}_{\pi}(s, a)$ be the t-step reward of executing π in the initial state s and then following policy π'. | 1511.06342#17 | 1511.06342#19 | 1511.06342 | [
"1503.02531"
] |
1511.06342#19 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The cost-to-go for a policy π after T steps is defined as $J_T(\pi) = -T\, \mathbb{E}_{s \sim D_{\pi}(\cdot)}\left[R(s, a)\right]$, where R(s, a) is the reward after executing action a in state s. Proposition 1. For the iterative algorithm described in Section 4.2, if the loss function in Eq. (7) converges to ε with the solution $\pi_{AMN}$ and $Z^{t}_{\pi_E}(s, a) - Z^{t}_{\pi_{AMN}}(s, a) \geq u$ for all actions a ∈ A and t ∈ {1, ..., T}, then the cost-to-go of Actor-Mimic $J_T(\pi_{AMN})$ grows linearly after executing T actions: $J_T(\pi_{AMN}) \leq J_T(\pi_E) + uT\epsilon / \log 2$. The above linear growth rate of the cost-to-go is achieved by sampling from the AMN action output π_AMN, while the cost grows quadratically if the algorithm only samples from the expert action output. Our empirical observations confirm this theoretical prediction. # 5 EXPERIMENTS In the following experiments, we validate the Actor-Mimic method by demonstrating its effectiveness at both multitask and transfer learning in the Arcade Learning Environment (ALE). For our experiments, we use subsets of a collection of 20 Atari games. 19 games of this set were among the 29 games on which the DQN method performed at a super-human level. We additionally chose 1 game, Seaquest, on which the DQN had performed poorly when compared to a human expert. | 1511.06342#18 | 1511.06342#20 | 1511.06342 | [
"1503.02531"
] |
1511.06342#20 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Details on the training procedure are described in Appendix B. 5.1 MULTITASK To first evaluate the Actor-Mimic objective on multitask learning, we demonstrate the effectiveness of training an AMN over multiple games simultaneously. In this particular case, since our focus is | 1511.06342#19 | 1511.06342#21 | 1511.06342 | [
"1503.02531"
] |
1511.06342#21 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning |
Atlantis: DQN mean/max 57279 / 541000; AMN mean/max 165065 / 584196; 100% × AMN/DQN 288.2% / 108.0%.
Boxing: DQN mean/max 81.47 / 88.02; AMN mean/max 76.264 / 81.860; 100% × AMN/DQN 93.61% / 93.00%.
Breakout: DQN mean/max 273.15 / 377.96; AMN mean/max 347.01 / 370.32; 100% × AMN/DQN 127.0% / 97.98%.
Crazy Climber: DQN mean/max 96189 / 117593; AMN mean/max 57070 / 74342; 100% × AMN/DQN 59.33% / 63.22%.
Enduro: DQN mean/max 457.60 / 808.00; AMN mean/max 499.3 / 686.77; 100% × AMN/DQN 109.1% / 85.00%.
Pong: DQN mean/max 19.581 / 20.140; AMN mean/max 15.275 / 18.780; 100% × AMN/DQN 78.01% / 93.25%.
Seaquest: DQN mean/max 4278.9 / 6200.5; AMN mean/max 1177.3 / 1466.0; 100% × AMN/DQN 27.51% / 23.64%.
Space Invaders: DQN mean/max 1669.2 / 2109.7; AMN mean/max 1142.4 / 1349.0; 100% × AMN/DQN 68.44% / 63.94%.
Table 1: Actor-Mimic results on a set of eight Atari games. We compare the AMN performance to that of the expert DQNs trained separately on each game. The expert DQNs were trained until convergence and the AMN was trained for 100 training epochs, which is equivalent to 25 million input frames per source game. For the AMN, we report the maximum test reward ever achieved in epochs 1-100 and the mean test reward in epochs 91-100. For the DQN, we report the maximum test reward ever achieved until convergence and the mean test reward in the last 10 epochs of DQN training. Additionally, we report the percentage ratio of the AMN reward to the expert DQN reward for every game, for both mean and max rewards. These percentage ratios are plotted in Figure 6. The AMN results are averaged over 2 separately trained networks. | 1511.06342#20 | 1511.06342#22 | 1511.06342 | [
"1503.02531"
] |
1511.06342#22 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | on multitask learning and not transfer learning, we disregard the feature regression objective and set β to 0. Figure 1 and Table 1 show the results of an AMN trained on 8 games simultaneously with the policy regression objective, compared to an expert DQN trained separately on each game. The AMN and every individual expert DQN in this case had exactly the same network architecture. We can see that the AMN quickly reaches close-to-expert performance on 7 games out of 8, taking only around 20 epochs, or 5 million training frames, to settle to a stable behaviour. This is in comparison to the expert networks, which were trained for up to 50 million frames. One result observed during training is that the AMN often becomes more consistent in its behaviour than the expert DQN, with a noticeably lower reward variance in every game except Atlantis and Pong. Another surprising result is that the AMN achieves a significantly higher mean reward in the game of Atlantis and a relatively higher mean reward in the games of Breakout and Enduro. This is despite the fact that the AMN is not optimized to improve reward over the expert but only to replicate the expert's | 1511.06342#21 | 1511.06342#23 | 1511.06342 | [
"1503.02531"
] |
1511.06342#23 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | behaviour. We also observed this increase in source task performance again when we later increased the AMN model complexity for the transfer experiments (see the Atlantis experiments in Appendix D). The AMN had the worst performance on the game of Seaquest, which was a game on which the expert DQN itself did not do very well. It is possible that a low-quality expert policy has difficulty teaching the AMN to even replicate its own (poor) behaviour. We compare the performance of our AMN against a baseline of two different multitask DQN architectures in Appendix C. 5.2 TRANSFER We have found that although a small AMN can learn how to behave at a close-to-expert level on multiple source tasks, a larger AMN can more easily transfer knowledge to target tasks after being trained on the source tasks. For the transfer experiments, we therefore significantly increased the AMN model complexity relative to that of an expert. Using a larger network architecture also allowed us to scale up to playing 13 source games at once (see Appendix D for source task performance using the larger AMNs). We additionally found that using an AMN trained for too long on the source tasks hurt transfer, as it was likely overfitting. | 1511.06342#22 | 1511.06342#24 | 1511.06342 | [
"1503.02531"
] |
1511.06342#24 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Therefore, for the transfer experiments, we train the AMN on only 4 million frames for each of the source games. To evaluate the Actor-Mimic objective on transfer learning, the previously described large AMNs are used as a weight initialization for DQNs which are each trained on a different target task. We additionally independently evaluate the benefit of the feature regression objective during transfer by having one AMN trained with only the policy regression objective (AMN-policy) and another trained using both feature and policy regression (AMN-feature). The results are then compared to the baseline of a DQN initialized with random weights. The performance on a set of 7 target games is detailed in Table 2 (learning curves are plotted in Figure 7). We can see that the AMN pretraining provides a definite increase in learning speed for the 3 games of Breakout, Star Gunner and Video Pinball. The results in Breakout and Video Pinball demonstrate that the policy regression objective alone provides significant positive transfer in some target tasks. The reason for this large positive transfer might be that the source game Pong has very similar mechanics to both Video Pinball and Breakout, where one must use a paddle to prevent a ball from falling off screen. The machinery used to detect the ball in Pong would likely be useful in detecting the ball for these two target tasks, given some | 1511.06342#23 | 1511.06342#25 | 1511.06342 | [
"1503.02531"
] |
1511.06342#25 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | fine-tuning. Additionally, the feature regression objective causes a significant speed-up in the game of Star Gunner compared to both the random initialization and the network trained solely with policy regression. Therefore, even though the feature regression objective can slightly hurt transfer in some source games, it can provide large
| Game | Network | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Breakout | Random | 1.182 | 5.278 | 29.13 | 102.3 | 202.8 | 212.8 | 252.9 | 211.8 | 243.5 | 258.7 |
| Breakout | AMN-policy | 18.35 | 102.1 | 216.0 | 271.1 | 308.6 | 286.3 | 284.6 | 318.8 | 281.6 | 311.3 |
| Breakout | AMN-feature | 16.23 | 119.0 | 153.7 | 191.8 | 172.6 | 233.9 | 248.5 | 178.8 | 235.6 | 225.5 |
| Gopher | Random | 294.0 | 578.9 | 1360 | 1540 | 1820 | 1133 | 633.0 | 1306 | 1758 | 1539 |
| Gopher | AMN-policy | 715.0 | 612.7 | 1362 | 924.8 | 1029 | 1186 | 1081 | 936.7 | 1251 | 1142 |
| Gopher | AMN-feature | 636.2 | 1110 | 918.8 | 1073 | 1028 | 810.1 | 1008 | 868.8 | 1054 | 982.4 |
| Krull | Random | 4302 | 6193 | 6576 | 7030 | 6754 | 5294 | 5949 | 5557 | 5366 | 6005 |
| Krull | AMN-policy | 5827 | 7279 | 6838 | 6971 | 7277 | 7129 | 7854 | 8012 | 7244 | 7835 |
| Krull | AMN-feature | 5033 | 7256 | 7008 | 7582 | 7665 | 8016 | 8133 | 6536 | 7832 | 6923 |
| Road Runner | Random | 327.5 | 988.1 | 16263 | 27183 | 26639 | 29488 | 33197 | 27683 | 25235 | 31647 |
| Road Runner | AMN-policy | 1561 | 5119 | 19483 | 22132 | 23391 | 23813 | 34673 | 33476 | 31967 | 31416 |
| Road Runner | AMN-feature | 1349 | 6659 | 18074 | 16858 | 18099 | 22985 | 27023 | 24149 | 28225 | 23342 |
| Robotank | Random | 4.830 | 6.965 | 9.825 | 13.22 | 21.07 | 22.54 | 31.94 | 29.80 | 37.12 | 34.04 |
| Robotank | AMN-policy | 3.502 | 4.522 | 11.03 | 9.215 | 16.89 | 17.31 | 18.66 | 20.58 | 23.58 | 23.02 |
| Robotank | AMN-feature | 3.550 | 6.162 | 13.94 | 17.58 | 17.57 | 20.72 | 20.13 | 21.13 | 26.14 | 23.29 |
| Star Gunner | Random | 221.2 | 468.5 | 927.6 | 1084 | 1508 | 1626 | 3286 | 16017 | 36273 | 45322 |
| Star Gunner | AMN-policy | 274.3 | 302.0 | 978.4 | 1667 | 4000 | 14655 | 31588 | 45667 | 38738 | 53642 |
| Star Gunner | AMN-feature | 1405 | 4570 | 18111 | 23406 | 36070 | 46811 | 50667 | 49579 | 50440 | 56839 |
| Video Pinball | Random | 2323 | 8549 | 6780 | 5842 | 10383 | 11093 | 8468 | 5476 | 9964 | 11893 |
| Video Pinball | AMN-policy | 2583 | 25821 | 95949 | 143729 | 57114 | 106873 | 111074 | 73523 | 34908 | 123337 |
| Video Pinball | AMN-feature | 1593 | 3958 | 21341 | 12421 | 15409 | 18992 | 15920 | 48690 | 24366 | 26379 |
| 1511.06342#24 | 1511.06342#26 | 1511.06342 | [
"1503.02531"
] |
1511.06342#26 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Table 2: Actor-Mimic transfer results for a set of 7 games. The 3 networks are trained as DQNs on the target task, with the only difference being the weight initialization. "Random" means random initial weights, "AMN-policy" means a weight initialization with an AMN trained using policy regression, and "AMN-feature" means a weight initialization with an AMN trained using both policy and feature regression (see text for more details). We report the average test reward every 4 training epochs (equivalent to 1 million training frames), where the average is over 4 testing epochs that are evaluated immediately after each training epoch. For each game, we bold the network results that have the highest average testing reward for that particular column. | 1511.06342#25 | 1511.06342#27 | 1511.06342 | [
"1503.02531"
] |
1511.06342#27 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | nal layer logits (Ba & Caruana, 2014) or the high-temperature softmax outputs of the experts (Hinton et al., 2015). Our approach is most similar to the technique of (Hinton et al., 2015) 7 Published as a conference paper at ICLR 2016 which matches the high-temperature outputs of the mimic network with that of the expert network. In addition, we also tried an objective that provides expert guidance at the feature level instead of only at the output level. A similar idea was also explored in the model compression case (Romero et al., 2015), where a deep and thin mimic network used a larger expert networkâ s intermediate features as guiding hints during training. In contrast to these model compression techniques, our method is not concerned with decreasing test time computation but instead using experts to provide otherwise unavailable supervision to a mimic network on several distinct tasks. Actor-Mimic can also be considered as part of the larger Imitation Learning class of methods, which use expert guidance to teach an agent how to act. One such method, called DAGGER (Ross et al., 2011), is similar to our approach in that it trains a policy to directly mimic an expertâ s behaviour while sampling actions from the mimic agent. Actor-Mimic can be considered as an extension of this work to the multitask case. In addition, using a deep neural network to parameterize the policy provides us with several advantages over the more general Imitation Learning framework. First, we can exploit the automatic feature construction ability of deep networks to transfer knowledge to new tasks, as long as the raw data between tasks is in the same form, i.e. pixel data with the same dimen- sions. | 1511.06342#26 | 1511.06342#28 | 1511.06342 | [
"1503.02531"
] |
1511.06342#28 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Second, we can deï¬ ne objectives which take into account intermediate representations of the state and not just the policy outputs, for example the feature regression objective which provides a richer training signal to the mimic network than just samples of the expertâ s action output. Recent work has explored combining expert-guided Imitation Learning and deep neural networks in the single-task case. Guo et al. (2014) use DAGGER with expert guidance provided by Monte-Carlo Tree Search (MCTS) policies to train a deep neural network that improves on the original DQNâ | 1511.06342#27 | 1511.06342#29 | 1511.06342 | [
"1503.02531"
] |
1511.06342#29 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | s performance. Some disadvantages of using MCTS experts as guidance are that they require both access to the (hidden) RAM state of the emulator as well as an environment model. Another re- lated method is that of guided policy search (Levine & Koltun, 2013), which combines a regularized importance-sampled policy gradient with guiding trajectory samples generated using differential dy- namic programming. The goal in that work was to learn continuous control policies which improved upon the basic policy gradient method, which is prone to poor local minima. A wide variety of methods have also been studied in the context of RL transfer learning (see Tay- lor & Stone (2009) for a more comprehensive review). One related approach is to use a dual state representation with a set of task-speciï¬ c and task-independent features known as â problem-spaceâ and â agent-spaceâ descriptors, respectively. For each source task, a task-speciï¬ c value function is learnt on the problem-space descriptors and then these learnt value functions are transferred to a single value function over the agent-space descriptors. Because the agent-space value function is deï¬ ned over features which maintain constant semantics across all tasks, this value function can be directly transferred to new tasks. Banerjee & Stone (2007) constructed agent-space features by ï¬ rst generating a ï¬ xed-depth game tree of the current state, classifying each future state in the tree as either {win, lose, draw, nonterminal} and then coalescing all states which have the same class or subtree. To transfer the source tasks value functions to agent-space, they use a simple weighted av- erage of the source task value functions, where the weight is proportional to the number of times that a speciï¬ c agent-space descriptor has been seen during play in that source task. In a related method, Konidaris & Barto (2006) transfer the value function to agent-space by using regression to predict every source tasks problem-space value function from the agent-space descriptors. A drawback of these methods is that the agent- and problem-space descriptors are either hand-engineered or gener- ated from a perfect environment model, thus requiring a signiï¬ cant amount of domain knowledge. 7 DISCUSSION In this paper we deï¬ | 1511.06342#28 | 1511.06342#30 | 1511.06342 | [
"1503.02531"
] |
1511.06342#30 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | ned Actor-Mimic, a novel method for training a single deep policy network over a set of related source tasks. We have shown that a network trained using Actor-Mimic is capable of reaching expert performance on many games simultaneously, while having the same model complexity as a single expert. In addition, using Actor-Mimic as a multitask pretraining phase can significantly improve learning speed on a set of target tasks. This demonstrates that the features learnt over the source tasks can generalize to new target tasks, given a sufficient level of similarity between source and target tasks. A direction of future work is to develop methods that can enable a targeted knowledge transfer from source tasks by identifying related source tasks for a given target task. Targeted knowledge transfer can potentially help in the cases of negative transfer observed in our experiments. | 1511.06342#29 | 1511.06342#31 | 1511.06342 | [
"1503.02531"
] |
1511.06342#31 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Acknowledgments: This work was supported by Samsung and NSERC. 8 Published as a conference paper at ICLR 2016 # REFERENCES Ba, Jimmy and Caruana, Rich. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pp. 2654â 2662, 2014. Banerjee, Bikramjit and Stone, Peter. General game learning using knowledge transfer. In Interna- tional Joint Conferences on Artiï¬ cial Intelligence, pp. 672â 677, 2007. | 1511.06342#30 | 1511.06342#32 | 1511.06342 | [
"1503.02531"
] |
1511.06342#32 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Bellemare, Marc G., Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning envi- ronment: An evaluation platform for general agents. Journal of Artiï¬ cial Intelligence Research, 47:253â 279, 2013. Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 1. Athena Scientiï¬ c Belmont, MA, 1995. Guo, Xiaoxiao, Singh, Satinder, Lee, Honglak, Lewis, Richard L, and Wang, Xiaoshi. | 1511.06342#31 | 1511.06342#33 | 1511.06342 | [
"1503.02531"
] |
1511.06342#33 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Deep learning for real-time atari game play using ofï¬ ine monte-carlo tree search planning. In Advances in Neural Information Processing Systems 27, pp. 3338â 3346, 2014. Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Kingma, Diederik P. and Ba, Jimmy. Adam: | 1511.06342#32 | 1511.06342#34 | 1511.06342 | [
"1503.02531"
] |
1511.06342#34 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | A method for stochastic optimization. In International Conference on Learning Representations, 2015. Konidaris, George and Barto, Andrew G. Autonomous shaping: Knowledge transfer in reinforce- In Proceedings of the 23rd international conference on Machine learning, pp. ment learning. 489â 496, 2006. Levine, Sergey and Koltun, Vladlen. Guided policy search. In Proceedings of the 30th international conference on Machine Learning, 2013. | 1511.06342#33 | 1511.06342#35 | 1511.06342 | [
"1503.02531"
] |
1511.06342#35 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. CoRR, abs/1504.00702, 2015. Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicholas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015. | 1511.06342#34 | 1511.06342#36 | 1511.06342 | [
"1503.02531"
] |
1511.06342#36 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wier- stra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. | 1511.06342#35 | 1511.06342#37 | 1511.06342 | [
"1503.02531"
] |
1511.06342#37 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Nature, 518(7540):529â 533, 2015. Perkins, Theodore J and Precup, Doina. A convergent form of approximate policy iteration. Advances in neural information processing systems, pp. 1595â 1602, 2002. In Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The annals of mathemat- ical statistics, pp. 400â 407, 1951. Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and In International Conference on Learning Bengio, Yoshua. | 1511.06342#36 | 1511.06342#38 | 1511.06342 | [
"1503.02531"
] |
1511.06342#38 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Fitnets: Hints for thin deep nets. Representations, 2015. Ross, Stephane, Gordon, Geoffrey, and Bagnell, Andrew. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15: 627â 635, 2011. Seneta, E. Sensitivity analysis, ergodicity coefï¬ cients, and rank-one updates for ï¬ nite markov chains. Numerical solution of Markov chains, 8:121â 129, 1991. Sutton, Richard S. and Barto, Andrew G. Reinforcement learning: An introduction. | 1511.06342#37 | 1511.06342#39 | 1511.06342 | [
"1503.02531"
] |
1511.06342#39 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | MIT Press, Cambridge, 1998. Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009. # APPENDIX A PROOF OF THEOREM 1 Lemma 2. For any two policies π¹, π², the stationary distributions over the states under the policies are bounded: $\|D_{\pi^1} - D_{\pi^2}\| \leq c_p \|\pi^1 - \pi^2\|$, for some $c_p > 0$. Proof. | 1511.06342#38 | 1511.06342#40 | 1511.06342 | [
"1503.02531"
] |
1511.06342#40 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Let $T^1$ and $T^2$ be the two transition matrices under the stationary distributions $D_{\pi^1}$, $D_{\pi^2}$. For any ij-th elements $T^1_{ij}$, $T^2_{ij}$: $\|T^1_{ij} - T^2_{ij}\| = \left\| \sum_{a} p(s_i|a, s_j)\left(\pi^1(a|s_j) - \pi^2(a|s_j)\right) \right\| \quad (9)$ $\leq |A|\, \|\pi^1(a|s_j) - \pi^2(a|s_j)\|_{\infty} \quad (10)$ $\leq |A|\, \|\pi^1 - \pi^2\|_{\infty}. \quad (11)$ The above bound for any ij-th element implies that the Euclidean distance of the transition matrices is also upper bounded, $\|T^1 - T^2\| \leq |S||A|\, \|\pi^1 - \pi^2\|$. Seneta (1991) has shown that $\|D_{\pi^1} - D_{\pi^2}\| \leq \frac{1}{1-\lambda^1} \|T^1 - T^2\|_{\infty}$, where $\lambda^1$ is the largest eigenvalue of $T^1$. Hence, there is a constant $c_p > 0$ such that $\|D_{\pi^1} - D_{\pi^2}\| \leq c_p \|\pi^1 - \pi^2\|$. Lemma 3. For any two softmax policy matrices $P_{\theta^1}$, $P_{\theta^2}$ from the linear function approximator, $\|P_{\theta^1} - P_{\theta^2}\| \leq c_s \|\Phi\theta^1 - \Phi\theta^2\|$, for some $c_s > 0$. Proof. | 1511.06342#39 | 1511.06342#41 | 1511.06342 | [
"1503.02531"
] |
1511.06342#41 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Note that the i-th row, j-th column element $p(a_j|s_i)$ of a softmax policy matrix P is computed by the softmax transformation of the Q-function: $P_{ij} = p(a_j|s_i) = \mathrm{softmax}\big(Q(s_i, a_j)\big) = \frac{e^{Q(s_i, a_j)}}{\sum_{k} e^{Q(s_i, a_k)}}. \quad (12)$ Because the softmax function is a monotonically increasing element-wise function on matrices, the Euclidean distance of the softmax transformation is upper bounded by the largest Jacobian in the domain of the softmax function. | 1511.06342#40 | 1511.06342#42 | 1511.06342 | [
"1503.02531"
] |
1511.06342#42 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Namely, for $c_s = \max_{x \in \mathrm{Dom}\,\mathrm{softmax}} \left\| \frac{\partial\,\mathrm{softmax}(x)}{\partial x} \right\|$: $\|\mathrm{softmax}(x^1) - \mathrm{softmax}(x^2)\| \leq c_s \|x^1 - x^2\|, \ \forall x^1, x^2 \in \mathrm{Dom}\,\mathrm{softmax}. \quad (13)$ By bounding the elements of the P matrix, this gives $\|P_{\theta^1} - P_{\theta^2}\| \leq c_s \|Q_{\theta^1} - Q_{\theta^2}\| = c_s \|\Phi\theta^1 - \Phi\theta^2\|$. Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy π induced by the Γ operator and that Γ is Lipschitz continuous with a constant $c_\epsilon$; then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution π* and θ*. Proof. We follow a contraction argument similar to Perkins & Precup (2002) and show that the iterative algorithm is a contraction process. Namely, for any two policies π¹ and π², the learning algorithm above produces new policies $\Gamma(Q_{\theta^1})$, $\Gamma(Q_{\theta^2})$ after one iteration, where $\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| \leq \beta \|\pi^1 - \pi^2\|$. Here ‖·‖ is the Euclidean norm and β ∈ (0, 1). By Lipschitz continuity, $\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| \leq c_\epsilon \|Q_{\theta^1} - Q_{\theta^2}\| = c_\epsilon \|\Phi\theta^1 - \Phi\theta^2\| \quad (14)$ $\leq c_\epsilon \|\Phi\| \|\theta^1 - \theta^2\|. \quad (15)$ | 1511.06342#41 | 1511.06342#43 | 1511.06342 | [
"1503.02531"
] |
1511.06342#43 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Let θ1 and θ2 be the stationary points of Eq. (7) under π¹ and π². That is, ∇θ1 = ∇θ2 = 0, respectively. Rearranging Eq. (8) gives $\frac{1}{\lambda}\|\theta^1 - \theta^2\| = \frac{1}{\lambda}\left\| \Phi^T D_{\pi^1}(P_{\theta^1} - \Pi_E) - \Phi^T D_{\pi^2}(P_{\theta^2} - \Pi_E) \right\| \quad (16)$ $= \frac{1}{\lambda}\left\| \Phi^T (D_{\pi^2} - D_{\pi^1})\Pi_E + \Phi^T D_{\pi^1} P_{\theta^1} - \Phi^T D_{\pi^1} P_{\theta^2} + \Phi^T D_{\pi^1} P_{\theta^2} - \Phi^T D_{\pi^2} P_{\theta^2} \right\| \quad (17)$ $= \frac{1}{\lambda}\left\| \Phi^T (D_{\pi^2} - D_{\pi^1})\Pi_E + \Phi^T D_{\pi^1} (P_{\theta^1} - P_{\theta^2}) + \Phi^T (D_{\pi^1} - D_{\pi^2}) P_{\theta^2} \right\| \quad (18)$ $\leq \frac{1}{\lambda}\left( \|\Phi^T\| \|D_{\pi^1} - D_{\pi^2}\| \|\Pi_E\| + \|\Phi^T\| \|D_{\pi^1}\| \|P_{\theta^1} - P_{\theta^2}\| + \|\Phi^T\| \|D_{\pi^1} - D_{\pi^2}\| \|P_{\theta^2}\| \right) \quad (19)$ $\leq c \|\pi^1 - \pi^2\|. \quad (20)$ The last inequality is given by Lemmas 2 and 3 and the compactness of Φ. For a Lipschitz constant $c_\epsilon$, there exists a β such that $\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| \leq \beta \|\pi^1 - \pi^2\|$. Hence, the sequence of policies generated by the algorithm converges almost surely to a unique fixed point π* from Lemma 1 and the Contraction Mapping Theorem (Bertsekas, 1995). Furthermore, the model parameters converge w.p. 1 to a stationary point θ* under the fixed-point policy π*. | 1511.06342#42 | 1511.06342#44 | 1511.06342 | [
"1503.02531"
] |
1511.06342#44 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | # APPENDIX B AMN TRAINING DETAILS All of our Actor-Mimic Networks (AMNs) were trained using the Adam (Kingma & Ba, 2015) optimization algorithm. The AMNs have a single 18-unit output, with each output corresponding to one of the 18 possible Atari player actions. Having the full 18-action output simplifies the multitask case when each game has a different subset of valid actions. While playing a certain game, we mask out AMN action outputs that are not valid for that game and take the softmax over only the subset of valid actions. We use a replay memory for each game to reduce correlations between successive frames and stabilize network training. Because the memory requirements of having the standard replay memory size of 1,000,000 frames for each game are prohibitive when we are training over many source games, for AMNs we use a per-game 100,000-frame replay memory. AMN training was stable even with only a per-game equivalent of a tenth of the replay memory size of the DQN experts. For the transfer experiments with the feature regression objective, we set the scaling parameter β to 0.01, and the feature prediction network fi was set to a linear projection from the AMN features to the i-th expert's features. For the policy regression objective, we use a softmax temperature of 1 in all cases. Additionally, during training for all AMNs we use an ε-greedy policy with ε set to a constant 0.1. Annealing ε from 1 did not provide any noticeable benefit. | 1511.06342#43 | 1511.06342#45 | 1511.06342 | [
"1503.02531"
] |
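Putting the Appendix B details together, a schematic multitask training step might look as follows. The function handles, replay-memory layout, and batch size are assumptions made for illustration (not the authors' code), and the expert is assumed to expose an 18-way Q-value output that is masked to its game's valid actions.

```python
import numpy as np

def masked_softmax(logits, valid_actions):
    """Softmax over the 18-way output restricted to a game's valid actions."""
    masked = np.full_like(logits, -np.inf)        # logits assumed to be floats
    masked[valid_actions] = logits[valid_actions]
    z = masked - masked.max()
    p = np.exp(z)                                 # invalid actions get probability 0
    return p / p.sum()

def amn_training_step(games, replay_memories, amn_logits_fn, expert_q_fns,
                      valid_actions, batch_size=32, tau=1.0):
    """One schematic pass over the source games: sample states from each game's
    100k-frame replay memory and accumulate the policy-regression loss (Eq. 4)."""
    total_loss, rng = 0.0, np.random
    for g in games:
        memory = replay_memories[g]
        batch = [memory[rng.randint(len(memory))] for _ in range(batch_size)]
        for state in batch:
            pi_expert = masked_softmax(expert_q_fns[g](state) / tau, valid_actions[g])
            pi_amn = masked_softmax(amn_logits_fn(state), valid_actions[g])
            total_loss -= np.sum(pi_expert * np.log(pi_amn + 1e-12))
    return total_loss / (len(games) * batch_size)   # value to pass to the Adam optimizer
```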
1511.06342#45 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | During training, we choose actions based on the AMN and not the expert DQN. We do not use weight decay during AMN training, as we empirically found that it did not provide any large benefit. For the experiments using the DQN algorithm, we optimize the networks with RMSProp. Since the DQNs are trained on a single game, their output layers contain only the player actions that are valid in the particular game they are trained on. The experts guiding the AMNs used the same architecture, hyperparameters and training procedure as that of Mnih et al. (2015). We use the full 1,000,000-frame replay memory when training any DQN. # APPENDIX C MULTITASK DQN BASELINE RESULTS As a baseline, we trained DQN networks over 8 games simultaneously to test their performance against the Actor-Mimic method. We tried two different architectures. The first uses the basic DQN procedure on all 8 games. This network has a single 18-action output shared by all games, but when we train or test in a particular game, we mask out and ignore the action values of actions that are invalid for that particular game. This architecture is denoted the Multitask DQN (MDQN). The second architecture is a DQN where each game has a separate fully-connected feature layer and action output. In this architecture only the convolutions are shared between games, and thus the features and action values are completely separate. This was to try to mitigate the destabilizing | 1511.06342#44 | 1511.06342#46 | 1511.06342 | [
"1503.02531"
] |
1511.06342#46 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | [Figure 2 plot: Actor-Mimic, expert DQN, and MDQN training curves for each of the 8 games; axis and tick residue omitted.] Figure 2: | 1511.06342#45 | 1511.06342#47 | 1511.06342 | [
"1503.02531"
] |
1511.06342#47 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The Actor-Mimic, expert DQN, and Multitask DQN (MDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MDQN test reward for each testing epoch. In the testing epoch we use « = 0.05 in the e-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch. BOXING BREAKOUT , x10% |â AMN |â DQN | -MCDQN ATLANTIS S 3 0b _. x104 CRAZY CLIMBER a 002 SL ° 8 ok ° 3 » g SEAQUEST a 3 8 8 8 8 8 8 io i ol. & oly okt 0 20 40 0 20 40 0 40 0 20 40 Figure 3: The Actor-Mimic, expert DQN, and Multitask Convolutions DQN (MCDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MCDQN test reward for each testing epoch. In the testing epoch we use â ¬ = 0.05 in the e-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch. effect that the different value scales of each game had during learning. This architecture is denoted the Multitask Convolutions DQN (MCDQN). The results for the MDQN and MCDQN are shown in Figures 2 and 3, respectively. | 1511.06342#46 | 1511.06342#48 | 1511.06342 | [
"1503.02531"
] |
1511.06342#48 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | From the figures, we can observe that the AMN is far more stable during training as well as consistently higher in performance than either the MDQN or MCDQN methods. In addition, it can be seen that the MDQN and MCDQN will often focus on performing reasonably well on a small subset of the source games, such as Boxing and Enduro, while making little to no progress in others, such as Breakout or Pong. Between the MDQN and MCDQN, we can see that the MCDQN hardly improves results even though it has a significantly larger computational cost that scales linearly with the number of source games. For the specific details of the architectures we tested: for the MDQN the architecture was 8x8x4x32-4¹ → 4x4x32x64-2 → 3x3x64x64-1 → 512 fully-connected units → 18 actions. This is exactly the same network architecture as used for the 8-game AMN in Section 5.1. For the MCDQN, the bottom convolutional layers were the same as the MDQN, except there are 8 parallel subnetworks on top of the convolutional layers. These game-specific subnetworks had the architecture: 512 fully-connected units → 18 actions. All layers except the action outputs were followed with a rectifi | 1511.06342#47 | 1511.06342#49 | 1511.06342 | [
"1503.02531"
] |
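The MDQN/MCDQN layer specifications quoted above can be transcribed into a simple configuration structure; this is only a restatement of the architecture strings in the paper's WxWxCxN-S notation, with dictionary keys chosen for illustration.

```python
# MDQN: three shared convolutions, one shared fully-connected layer,
# and a single 18-action output with per-game masking of invalid actions.
MDQN_ARCHITECTURE = [
    {"type": "conv", "kernel": 8, "in_channels": 4,  "filters": 32, "stride": 4},
    {"type": "conv", "kernel": 4, "in_channels": 32, "filters": 64, "stride": 2},
    {"type": "conv", "kernel": 3, "in_channels": 64, "filters": 64, "stride": 1},
    {"type": "fc",   "units": 512},
    {"type": "output", "units": 18},
]

# MCDQN: the same convolutional trunk, but the fully-connected layer and
# 18-action head are replicated once per source game (8 parallel heads).
MCDQN_GAME_SPECIFIC_HEAD = [
    {"type": "fc", "units": 512},
    {"type": "output", "units": 18},
]
```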
1511.06342#49 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | 18 actions. This is exactly the same network architecture as used for the 8 game AMN in Section 5.1. For the MCDQN, the bottom convolutional layers were the same as the MDQN, except there are 8 parallel subnetworks on top of the convolutional layers. These game-speciï¬ c subnetworks had the architecture: 512 fully- connected units â 18 actions. All layers except the action outputs were followed with a rectiï¬ | 1511.06342#48 | 1511.06342#50 | 1511.06342 | [
"1503.02531"
] |
1511.06342#50 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | er non-linearity. 1 Here we represent convolutional layers as WxWxCxN-S, where W is the width of the (square) convolution kernel, C is the number of input images, N is the number of ï¬ lter maps and S is the convolution stride. 12 Published as a conference paper at ICLR 2016 # APPENDIX D ACTOR-MIMIC NETWORK MULTITASK RESULTS FOR TRANSFER PRETRAINING The network used for transfer consisted of the following architecture: 8x8x4x256-4 1 â 4x4x256x512-2 â 3x3x512x512-1 â 3x3x512x512-1 â 2048 fully-connected units â 1024 fully- connected units â | 1511.06342#49 | 1511.06342#51 | 1511.06342 | [
"1503.02531"
] |