id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
1611.06440#42 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Network trimming: A data-driven neuron pruning approach towards efï¬ cient deep architectures. arXiv preprint arXiv:1607.03250, 2016. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classiï¬ cation with convolutional neural networks. In CVPR, 2014. Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Com- pression of deep convolutional neural networks for fast and low power mobile applications. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬ cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097â 1105, 2012. Andrew Lavin. maxDNN: An Efï¬ cient Convolution Kernel for Deep Learning with Maxwell GPUs. CoRR, abs/1501.06633, 2015a. URL http://arxiv.org/abs/1501.06633. | 1611.06440#41 | 1611.06440#43 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#43 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Andrew Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015b. Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554â 2564, 2016. Yann LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel. | 1611.06440#42 | 1611.06440#44 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#44 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Optimal brain damage. In Advances in Neural Information Processing Systems (NIPS), 1990. Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus Robert Müller. Efï¬ cient BackProp, pp. 9â 50. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998. James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 735â 742, 2010. James Martens, Ilya Sutskever, and Kevin Swersky. Estimating the Hessian by back-propagating curvature. arXiv preprint arXiv:1206.6464, 2012. Pavlo Molchanov, Xiaodong Yang, Shalini Gupta, Kihwan Kim, Stephen Tyree, and Jan Kautz. | 1611.06440#43 | 1611.06440#45 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#45 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Online detection and classiï¬ cation of dynamic hand gestures with recurrent 3d convolutional neural network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. M-E. Nilsback and A. Zisserman. Automated ï¬ ower classiï¬ cation over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008. 11 Published as a conference paper at ICLR 2017 Barak A. Pearlmutter. | 1611.06440#44 | 1611.06440#46 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#46 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Fast Exact Multiplication by the Hessian. Neural Computation, 6:147â 160, 1994. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classiï¬ cation Using Binary Convolutional Neural Networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279. Russell Reed. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740â 747, 1993. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115 (3):211â 252, 2015. K. Simonyan and A. Zisserman. | 1611.06440#45 | 1611.06440#47 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#47 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. Suraj Srinivas and R. Venkatesh Babu. Data-free parameter pruning for deep neural networks. In Mark W. Jones Xianghua Xie and Gary K. L. Tam (eds.), Proceedings of the British Machine Vision Conference (BMVC), pp. 31.1â 31.12. BMVA Press, September 2015. | 1611.06440#46 | 1611.06440#48 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#48 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074â 2082, 2016. | 1611.06440#47 | 1611.06440#49 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#49 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Hao Zhou, Jose M. Alvarez, and Fatih Porikli. Less is more: Towards compact cnns. In European Conference on Computer Vision, pp. 662â 677, Amsterdam, the Netherlands, October 2016. 12 Published as a conference paper at ICLR 2017 A APPENDIX A.1 FLOPS COMPUTATION To compute the number of ï¬ oating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional kernels we have: FLOPs = 2HW (CinK 2 + 1)Cout, (11) where H, W and Cin are height, width and number of channels of the input feature map, K is the kernel width (assumed to be symmetric), and Cout is the number of output channels. For fully connected layers we compute FLOPs as: FLOPs = (2I â 1)O, (12) where I is the input dimensionality and O is the output dimensionality. We apply FLOPs regularization during pruning to prune neurons with higher FLOPs ï¬ | 1611.06440#48 | 1611.06440#50 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#50 | Pruning Convolutional Neural Networks for Resource Efficient Inference | rst. FLOPs per convolutional neuron in every layer: VGG16: Î f lops = [3.1, 57.8, 14.1, 28.9, 7.0, 14.5, 14.5, 3.5, 7.2, 7.2, 1.8, 1.8, 1.8, 1.8] AlexNet: Î f lops = [2.3, 1.7, 0.8, 0.6, 0.6] R3DCNN: Î f lops = [5.6, 86.9, 21.7, 43.4, 5.4, 10.8, 1.4, 1.4] # A.2 NORMALIZATION ACROSS LAYERS Scaling a criterion across layers is very important for pruning. If the criterion is not properly scaled, then a hand-tuned multiplier would need to be selected for each layer. Statistics of feature map ranking by different criteria are shown in Fig.{10] Without normalization (Fig. [4a}fT%d). the weight magnitude criterion tends to rank feature maps from the first layers more important than last layers; the activation criterion ranks middle layers more important; and Taylor ranks first layers higher. | 1611.06440#49 | 1611.06440#51 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#51 | Pruning Convolutional Neural Networks for Resource Efficient Inference | After ⠬y normalization (Fig.[TOd}{T0f), all criteria have a shape more similar to the oracle, where each layer has some feature maps which are highly important and others which are unimportant. (a) Weight (b) Activation (mean) (c) Taylor errresrs) ee a en 7 _ = median (d) Weight + ¢2 (e) Activation (mean) + £2 (f) Taylor + £2 errresrs) ee a en 7 _ = median Figure 10: Statistics of feature map ranking by raw criteria values (top) and by criteria values after 02 normalization (bottom). 13 Published as a conference paper at ICLR 2017 MI Weight Activation OBD Taylor Mean S.d. | 1611.06440#50 | 1611.06440#52 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#52 | Pruning Convolutional Neural Networks for Resource Efficient Inference | APoZ Per layer Layer 1 0.41 0.40 0.65 0.78 0.36 0.54 0.95 Layer 2 0.23 0.57 0.56 0.59 0.33 0.78 0.90 Layer 3 0.14 0.55 0.48 0.45 0.51 0.66 0.74 Layer 4 0.26 0.23 0.58 0.42 0.10 0.36 0.80 Layer 5 0.17 0.28 0.49 0.52 0.15 0.54 0.69 Layer 6 0.21 0.18 0.41 0.48 0.16 0.49 0.63 Layer 7 0.12 0.19 0.54 0.49 0.38 0.55 0.71 Layer 8 0.18 0.23 0.43 0.42 0.30 0.50 0.54 Layer 9 0.21 0.18 0.50 0.55 0.35 0.53 0.61 Layer 10 0.26 0.15 0.59 0.60 0.45 0.61 0.66 Layer 11 0.41 0.12 0.61 0.65 0.45 0.64 0.72 Layer 12 0.47 0.15 0.60 0.66 0.39 0.66 0.72 Layer 13 0.61 0.21 0.77 0.76 0.65 0.76 0.77 Mean 0.28 0.27 0.56 0.57 0.35 0.59 0.73 All layers No normalization 0.35 0.34 0.35 0.30 0.43 0.65 0.14 ¢, normalization 0.47 0.37 0.63 0.63 0.52 0.65, 0.71 9 normalization 0.47 0.33 0.64 0.66 0.51 0.60 0.73 Min-max normalization 0.27 0.17 0.52 0.57 0.42 0.54 0.67 | 1611.06440#51 | 1611.06440#53 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#53 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Table 3: Spearmanâ s rank correlation of criteria vs oracle-abs in VGG-16 ï¬ ne-tuned on Birds 200. A.3 ORACLE COMPUTATION FOR VGG-16 ON BIRDS-200 We compute the change in the loss caused by removing individual feature maps from the VGG-16 network, after ï¬ ne-tuning on the Birds-200 dataset. Results are illustrated in Fig. 11a-11b for each feature map in layers 1 and 13, respectively. To compute the oracle estimate for a feature map, we remove the feature map and compute the network prediction for each image in the training set using the central crop with no data augmentation or dropout. We draw the following conclusions: | 1611.06440#52 | 1611.06440#54 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#54 | Pruning Convolutional Neural Networks for Resource Efficient Inference | â ¢ The contribution of feature maps range from positive (above the red line) to slightly negative (below the red line), implying the existence of some feature maps which decrease the training cost when removed. â ¢ There are many feature maps with little contribution to the network output, indicated by almost zero change in loss when removed. â ¢ Both layers contain a small number of feature maps which induce a signiï¬ cant increase in the loss when removed. (a) Layer 1 (b) Layer 13 oor 0.008 0.006 0.004 change in loss 0.002, 0.000 00025 30 0 35 20 i0 0 Feature map Index 0.0035, 0.0030) 0.0025 0.0029) 0.0015; ange in toss c.0010) 0.0008 556 a0 300 200 700 0 Feature map index Figure 11: Change in training loss as a function of the removal of a single feature map from the VGG-16 network after ï¬ ne-tuning on Birds-200. Results are plotted for two convolutional layers w.r.t. the index of the removed feature map index. The loss with all feature maps, 0.00461, is indicated with a red horizontal line. | 1611.06440#53 | 1611.06440#55 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#55 | Pruning Convolutional Neural Networks for Resource Efficient Inference | 14 Published as a conference paper at ICLR 2017 100% <â & regularization, > = 0.01 80% " larization, 7 = 0.04 " lor, 50 updates ~~ Taylor, 100 updates 60% â Taylor, 200 updates Parameters 40% 20% 0%, 0 50 ~ 100 ~+150°~S*S*«S 002; Mini-batch updates, x1000 100% <â & regularization, > = 0.01 80% " larization, 7 = 0.04 " lor, 50 updates ~~ Taylor, 100 updates 60% â Taylor, 200 updates Parameters 40% Accuracy, test set 20% 0%, 0 50 ~ 100 ~+150°~S*S*«S 002; 380% 80% 60% 40% 20% 0% Mini-batch updates, x1000 Parameters Accuracy, test set 380% 80% 60% 40% 20% 0% Parameters | 1611.06440#54 | 1611.06440#56 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#56 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Figure 12: Comparison of our iterative pruning with pruning by regularization Table|3|contains a layer-by-layer listing of Spearmanâ s rank correlation of several criteria with the ranking of oracle-abs. In this more detailed comparison, we see the Taylor criterion shows higher correlation for all individual layers. For several methods including Taylor, the worst correlations are observed for the middle of the network, layers 5-10. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows the best performance is obtained by £2 normalization, hence we select it for our method. # A.4 COMPARISON WITH WEIGHT REGULARIZATION 5) find that fine-tuning with high @, or ¢2 regularization causes unimportant connections to be suppressed. Connections with energy lower than some threshold can be removed on the assumption that they do not contribute much to subsequent layers. The same work also finds that thresholds must be set separately for each layer depending on its sensitivity to pruning. The procedure to evaluate sensitivity is time-consuming as it requires pruning layers independently during evaluation. The idea of pruning with high regularization can be extended to removing the kernels for an entire feature map if the £2 norm of those kernels is below a predefined threshold. We compare our approach with this regularization-based pruning for the task of pruning the last convolutional layer of VGG-16 fine-tuned for Birds-200. By considering only a single layer, we avoid the need to compute layerwise sensitivity. Parameters for optimization during fine-tuning are the same as other experiments with the Birds-200 dataset. For the regularization technique, the pruning threshold is set to ¢ = 10~° while we vary the regularization coefficient 7 of the £2 norm on each feature map kernel} We prune only kernel weights, while keeping the bias to maintain the same expected output. A comparison between pruning based on regularization and our greedy scheme is illustrated in Fig. 12. We observe that our approach has higher test accuracy for the same number of remaining unpruned feature maps, when pruning 85% or more of the feature maps. We observe that with high regularization all weights tend to zero, not only unimportant weights as Han et al. (2015) observe in the case of ImageNet networks. | 1611.06440#55 | 1611.06440#57 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#57 | Pruning Convolutional Neural Networks for Resource Efficient Inference | The intuition here is that with regularization we push all weights down and potentially can affect important connections for transfer learning, whereas in our iterative procedure we only remove unimportant parameters leaving others untouched. A.5 COMBINATION OF CRITERIA One of the possibilities to improve saliency estimation is to combine several criteria together. One of the straight forward combinations is Taylor and mean activation of the neuron. We compute the joint criteria as Î joint(z(k) ) and perform a grid search of parameter λ in Fig.13. The highest correlation value for each dataset is marked with with vertical bar with λ and gain. We observe that the gain of linearly combining criteria is negligibly small (see â | 1611.06440#56 | 1611.06440#58 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#58 | Pruning Convolutional Neural Networks for Resource Efficient Inference | â s in the ï¬ gure). 5In our implementation, the regularization coefï¬ cient is multiplied by the learning rate equal to 10â 4. 15 Published as a conference paper at ICLR 2017 oo s Ga S 0.006400 = 169e03 Correlation, higher better 0.00.4 0.05, 4 o im 107 10° d, criterion = (1 - \)*Taylor + \*Activation â S = VGG-16/Birds-200 â AlexNet/Flowers-102 â VGG-16/Flowers-102 â AlexNet/ImageNet â AlexNet/Birds-200 Figure 13: | 1611.06440#57 | 1611.06440#59 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#59 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Spearman rank correlation for linear combination of criteria. The per layer metric is used. Each â indicates the gain in correlation for one experiment. A.6 OPTIMAL BRAIN DAMAGE IMPLEMENTATION OBD computes saliency of a parameter by computing a product of the squared magnitude of the parameter and the corresponding element on the diagonal of the Hessian. For many deep learning frameworks, an efï¬ cient implementation of the diagonal evaluation is not straightforward and approximation techniques must be applied. Our implementation of Hessian diagonal computation was inspired by Dauphin et al. (2015) work, where the technique proposed by Bekas et al. (2007) was used to evaluate SGD preconditioned with the Jacobi preconditioner. It was shown that diagonal of the Hessian can be approximated as: diag(H) = E[v © Hv] = E[v© V(VC -v)], (13) where © is the element-wise product, v are random vectors with entries +1, and V is the gradient operator. To compute saliency with OBD, we randomly draw v and compute the diagonal over 10 iterations for a single minibatch for 1000 mini batches. We found that this number of mini batches is required to compute close approximation of the Hessianâ s diagonal (which we verified). Computing saliency this way is computationally expensive for iterative pruning, and we use a slightly different but more efficient procedure. Before the first pruning iteration, saliency is initialized from values computed off-line with 1000 minibatches and 10 iterations, as described above. Then, at every minibatch we compute the OBD criteria with only one iteration and apply an exponential moving averaging with a coefficient of 0.99. We verified that this computes a close approximation to the Hessianâ s diagonal. A.7 CORRELATION OF TAYLOR CRITERION WITH GRADIENT AND ACTIVATION The Taylor criterion is composed of both an activation term and a gradient term. In Figure 14, we depict the correlation between the Taylor criterion and each constituent part. We consider expected absolute value of the gradient instead of the mean, because otherwise it tends to zero. The plots are computed from pruning criteria for an unpruned VGG network ï¬ ne-tuned for the Birds-200 dataset. (Values are shown after layer-wise normalization). | 1611.06440#58 | 1611.06440#60 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#60 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Figure 14(a-b) depict the Taylor criterion in the y-axis for all neurons w.r.t. the gradient and activation components, respectively. The bottom 10% of neurons (lowest Taylor criterion, most likely to be pruned) are depicted in red, while the top 10% are shown in green. Considering all neurons, both gradient and activation components demonstrate a linear trend with the Taylor criterion. However, for the bottom 10% of neurons, as shown in Figure 14(c-d), the activation criterion shows much stronger correlation, with lower activations indicating lower Taylor scores. | 1611.06440#59 | 1611.06440#61 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#61 | Pruning Convolutional Neural Networks for Resource Efficient Inference | 16 Published as a conference paper at ICLR 2017 0.25 . . . . E : oss F oxo : 0.25 . . E : oss . F oxo (a) (b) i 0.002 activation (normalized) i 0.002 . gradient (normalized) (c) (d) Figure 14: Correlation of Taylor criterion with gradient and activation (after layer-wise 2 normaliza- tion) for all neurons (a-b) and bottom 10% of neurons (c-d) for unpruned VGG after fine-tuning on Birds-200. | 1611.06440#60 | 1611.06440#62 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#62 | Pruning Convolutional Neural Networks for Resource Efficient Inference | 17 | 1611.06440#61 | 1611.06440 | [
"1512.08571"
]
|
|
1611.06216#0 | Generative Deep Neural Networks for Dialogue: A Short Review | 6 1 0 2 v o N 8 1 ] L C . s c [ 1 v 6 1 2 6 0 . 1 1 6 1 : v i X r a # Generative Deep Neural Networks for Dialogue: A Short Review Iulian Vlad Serban Department of Computer Science and Operations Research, University of Montreal # Ryan Lowe School of Computer Science, McGill University # Laurent Charlin School of Computer Science, McGill University # Joelle Pineau School of Computer Science, McGill University | 1611.06216#1 | 1611.06216 | [
"1605.06069"
]
|
|
1611.06216#1 | Generative Deep Neural Networks for Dialogue: A Short Review | # Abstract Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure. | 1611.06216#0 | 1611.06216#2 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#2 | Generative Deep Neural Networks for Dialogue: A Short Review | # Introduction Researchers have recently started investigating sequence-to-sequence (Seq2Seq) models for dialogue applications. These models typically use neural networks to both represent dialogue histories and to generate or select appropriate responses. Such models are able to leverage large amounts of data in order to learn meaningful natural language representations and generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. Although the Seq2Seq framework is different from the well-established goal-oriented setting [Gorin et al., 1997, Young, 2000, Singh et al., 2002], these models have already been applied to several real-world applications, with Microsoftâ s system Xiaoice [Markoff and Mozur, 2015] and Googleâ s Smart Reply system [Kannan et al., 2016] as two prominent examples. Researchers have mainly explored two types of Seq2Seq models. | 1611.06216#1 | 1611.06216#3 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#3 | Generative Deep Neural Networks for Dialogue: A Short Review | The ï¬ rst are generative models, which are usually trained with cross-entropy to generate responses word-by-word conditioned on a dialogue context [Ritter et al., 2011, Vinyals and Le, 2015, Sordoni et al., 2015, Shang et al., 2015, Li et al., 2016a, Serban et al., 2016b]. The second are discriminative models, which are trained to select an appropriate response from a set of candidate responses [Lowe et al., 2015, Bordes and Weston, 2016, Inaba and Takahashi, 2016, Yu et al., 2016]. In a related strand of work, researchers have also investigated applying neural networks to the different components of a standard dialogue system, including natural language understanding, natural language generation, dialogue state tracking and 30th Conference on Neural Information Processing Systems (NIPS 2016), Workshop on Learning Methods for Dialogue, Barcelona, Spain. evaluation [Wen et al., 2016, 2015, Henderson et al., 2013, MrkÅ¡i´c et al., 2015, Su et al., 2015]. In this paper, we focus on generative models trained with cross-entropy. One weakness of current generative models is their limited ability to incorporate rich dialogue context and to generate meaningful and diverse responses [Serban et al., 2016b, Li et al., 2016a]. To overcome this challenge, we propose new generative models that are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure. Our experiments demonstrate the importance of the model architecture and the related inductive biases in achieving this improved performance. CEOS) GORE a O dated A) Classic LSTM Cc) MrRNN B) VHRED Figure 1: Probabilistic graphical models for dialogue response generation. Variables w represent natural language utterances. Variables z represent discrete or continuous stochastic latent variables. (A): Classic LSTM model, which uses a shallow generation process. This is problematic because it has no mechanism for incorporating uncertainty and ambiguity and because it forces the model to generate compositional and long-term structure incrementally on a word-by-word basis. (B): | 1611.06216#2 | 1611.06216#4 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#4 | Generative Deep Neural Networks for Dialogue: A Short Review | VHRED expands the generation process by adding one latent variable for each utterance, which helps incorporate uncertainty and ambiguity in the representations and generate meaningful, diverse responses. (C): MrRNN expands the generation process by adding a sequence of discrete stochastic variables for each utterance, which helps generate responses with high-level compositional structure. # 2 Models HRED: The Hierarchical Recurrent Encoder-Decoder model (HRED) [Serban et al., 2016b] is a type of Seq2Seq model that decomposes a dialogue into a two-level hierarchy: a sequence of utterances, each of which is a sequence of words. HRED consists of three recurrent neural networks (RNNs): an encoder RNN, a context RNN and a decoder RNN. Each utterance is encoded into a real-valued vector representation by the encoder RNN. These utterance representations are given as input to the context RNN, which computes a real-valued vector representation summarizing the dialogue at every turn. This summary is given as input to the decoder RNN, which generates a response word-by-word. Unlike the RNN encoders in previous Seq2Seq models, the context RNN is only updated once every dialogue turn and uses the same parameters for each update. This gives HRED an inductive bias that helps incorporate long-term context and learn invariant representations. VHRED: The Latent Variable Hierarchical Recurrent Encoder-Decoder model (VHRED) [Serban et al., 2016c] is an HRED model with an additional component: a high-dimensional stochastic latent variable at every dialogue turn. As in HRED, the dialogue context is encoded into a vector representation using encoder and context RNNs. Conditioned on the summary vector at each dialogue turn, VHRED samples a multivariate Gaussian variable, which is given along with the summary vector as input to the decoder RNN. The multivariate Gaussian latent variable allows modelling ambiguity and uncertainty in the dialogue through the latent variable distribution parameters (mean and variance parameters). This provides a useful inductive bias, which helps VHRED encode the dialogue context into a real-valued embedding space even when the dialogue context is ambiguous or uncertain, and it helps VHRED generate more diverse responses. MrRNN: | 1611.06216#3 | 1611.06216#5 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#5 | Generative Deep Neural Networks for Dialogue: A Short Review | The Multiresolution RNN (MrRNN) [Serban et al., 2016a] models dialogue as two parallel stochastic sequences: a sequence of high-level coarse tokens (coarse sequences), and a sequence of low-level natural language words (utterances). The coarse sequences follow a latent stochastic processâ analogous to hidden Markov modelsâ which conditions the utterances through a hierar- chical generation process. The hierarchical generation process ï¬ rst generates the coarse sequence, and conditioned on this generates the natural language utterance. In our experiments, the coarse | 1611.06216#4 | 1611.06216#6 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#6 | Generative Deep Neural Networks for Dialogue: A Short Review | 2 Table 1: Model response examples. The arrows indicate change of turn. Dialogue Context Hello I tried to install ubuntu studio but I get W : Failed to fetch <url > 404 Not Found when apt-get update â the ppa you added is not online hi community.. is there any difference between " /.bashrc " " /.bashrc. local " â yes, theyâ re different ï¬ les :-) the default /.bashrc sources /.bashrc. local â sorry I could undersatnd... If I write on terminal " gedit /.bashrc " it opens a ï¬ le but how I open " /.bashrc. local " â " gedit /.bashrc. local "... â but it just open blank ï¬ le nothing inside it Hey guys what do you in general use for irc something ubuntu xchat or xchat-gnome ? â without -gnome. that is just cut down â you mean drop xchat-gnome and go with xchat ? when setting up rules with iptables command only writes changes this ï¬ le " etciptables. rules "? i ask this so i can backup before messing anything â sudo iptables-save something . dat to backup your rules restore with sudo iptables-restore < something . dat Model Response MrRNN Act. -Ent.: try a different mirror MrRNN Noun: something VHRED: but I dont know how to add it HRED: how do I do that ? MrRNN Act.-Ent.: open up gedit /.bashrc, there is a ï¬ le called " something " MrRNN Noun: the ï¬ le is empty.. VHRED: itâ s in /.bashrc or /.bashrc HRED: what are you trying to do ? MrRNN Act. -Ent.: im using xchat right now MrRNN Noun: what is xchat-gnome ? VHRED: correct HRED: yes MrRNN Act. -Ent.: | 1611.06216#5 | 1611.06216#7 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#7 | Generative Deep Neural Networks for Dialogue: A Short Review | I donâ t see any reason why you need iptables to do that MrRNN Noun: are you using ubuntu ? VHRED: thx HRED: thanks sequences are deï¬ ned as either noun sequences or activity-entity pairs (predicate-argument pairs) extracted from the natural language utterances. The coarse sequences and utterances are modelled by two separate HRED models. The hierarchical generation provides an important inductive bias, because it helps MrRNN model high-level, compositional structure and generate meaningful and on-topic responses. # 3 Experiments We apply our generative models to dialogue response generation on the Ubuntu Dialogue Cor- pus [Lowe et al., 2015]. For each example, given a dialogue context, the model must generate an appropriate response. | 1611.06216#6 | 1611.06216#8 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#8 | Generative Deep Neural Networks for Dialogue: A Short Review | We also present results on Twitter in the Appendix. This task has been studied extensively in the recent literature [Ritter et al., 2011, Sordoni et al., 2015, Li et al., 2016a]. Corpus: The Ubuntu Dialogue Corpus consists of about half a million dialogues extracted from the #Ubuntu Internet Relayed Chat (IRC) channel. Users entering this chat channel usually have a speciï¬ c technical problem. Typically, users ï¬ rst describe their problem, and other users try to help them resolve it. The technical problems range from software-related and hardware-related issues (e.g. installing packages, ï¬ xing broken drivers) to informational needs (e.g. ï¬ nding software). Evaluation: We carry out an in-lab human study to evaluate the model responses. We recruit 5 human evaluators. We show each evaluator between 30 and 40 dialogue contexts with the ground truth response, and 4 candidate model responses. For each example, we ask the evaluators to compare the candidate responses to the ground truth response and dialogue context, and rate them for ï¬ uency and relevancy on a scale 0â 4, where 0 means incomprehensible or no relevancy and 4 means ï¬ awless English or all relevant. In addition to the human evaluation, we also evaluate dialogue responses w.r.t. the activity-entity metrics proposed by Serban et al. [2016a]. These metrics measure whether the model response contains the same activities (e.g. download, install) and entities (e.g. ubuntu, ï¬ refox) as the ground truth responses. Models that generate responses with the same activities and entities as the ground truth responsesâ including expert responses, which often lead to solving the userâ s problemâ | 1611.06216#7 | 1611.06216#9 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#9 | Generative Deep Neural Networks for Dialogue: A Short Review | are given higher scores. Sample responses from each model are shown in Table 1. Table 2: Ubuntu evaluation using F1 metrics w.r.t. activities and entities (mean scores ± 90% conï¬ dence intervals), and human ï¬ uency and human relevancy scores given on a scale 0-4 (â indicates scores signiï¬ cantly different from baseline models at 90% conï¬ dence) Model F1 Activity F1 Entity Human Fluency Human Relevancy LSTM HRED VHRED MrRNN Noun MrRNN Act.-Ent. 1.18 ±0.18 4.34 ±0.34 4.63 ±0.34 4.04 ±0.33 11.43 ±0.54 0.87 ±0.15 2.22 ±0.25 2.53 ±0.26 6.31 ±0.42 3.72 ±0.33 - 2.98 - 3.48â 3.42â - 1.01 - 1.32â 1.04 Results: The results are given in Table 2. The MrRNNs perform substantially better than the other models w.r.t. both the human evaluation study and the evaluation metrics based on activities and | 1611.06216#8 | 1611.06216#10 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#10 | Generative Deep Neural Networks for Dialogue: A Short Review | 3 entities. MrRNN with noun representations obtains an F1 entity score at 6.31, while all other models obtain less than half F1 scores between 0.87 â 2.53, and human evaluators consistently rate its ï¬ uency and relevancy signiï¬ cantly higher than all the baseline models. MrRNN with activity representations obtains an F1 activity score at 11.43, while all other models obtain less than half F1 activity scores between 1.18 â 4.63, and performs substantially better than the baseline models w.r.t. the F1 entity score. This indicates that the MrRNNs have learned to model high-level, goal-oriented sequential structure in the Ubuntu domain. Followed by these, VHRED performs better than the HRED and LSTM models w.r.t. both activities and entities. This shows that VHRED generates more appropriate responses, which suggests that the latent variables are useful for modeling uncertainty and ambiguity. Finally, HRED performs better than the LSTM baseline w.r.t. both activities and entities, which underlines the importance of representing longer-term context. | 1611.06216#9 | 1611.06216#11 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#11 | Generative Deep Neural Networks for Dialogue: A Short Review | These conclusions are conï¬ rmed by additional experiments on response generation for the Twitter domain (see Appendix). # 4 Discussion We have presented generative models for dialogue response generation. We have proposed ar- chitectural modiï¬ cations with inductive biases towards 1) incorporating longer-term context, 2) handling uncertainty and ambiguity, and 3) generating diverse and on-topic responses with high-level compositional structure. Our experiments show the advantage of the architectural modiï¬ cations quantitatively through human experiments and qualitatively through manual inspections. These experiments demonstrate the need for further research into generative model architectures. Although we have focused on three generative models, other model architectures such as memory-based models [Bordes and Weston, 2016, Weston et al., 2015] and attention-based models [Shang et al., 2015] have also demonstrated promising results and therefore deserve the attention of future research. In another line of work, researchers have started proposing alternative training and response selection criteria [Weston, 2016]. Li et al. [2016a] propose ranking candidate responses according to a mutual information criterion, in order to incorporate dialogue context efï¬ ciently and retrieve on-topic responses. Li et al. [2016b] further propose a model trained using reinforcement learning to optimize a hand-crafted reward function. Both these models are motivated by the lack of diversity observed in the generative model responses. Similarly, Yu et al. [2016] propose a hybrid modelâ combining retrieval models, neural networks and hand-crafted rulesâ trained using reinforcement learning to optimize a hand-crafted reward function. In contrast to these approaches, without combining several models or having to modify the training or response selection criterion, VHRED generates more diverse responses than previous models. Similarly, by optimizing the joint log-likelihood over sequences, MrRNNs generate more appropriate and on-topic responses with compositional structure. Thus, improving generative model architectures has the potential to compensate â | 1611.06216#10 | 1611.06216#12 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#12 | Generative Deep Neural Networks for Dialogue: A Short Review | or even remove the need â for hand-crafted reward functions. At the same time, the models we propose are not necessarily better language models, which are more efï¬ cient at compressing dialogue data as measured by word perplexity. Although these models produce responses that are preferred by humans, they often result in higher test set perplexity than traditional LSTM language models. This suggests maximizing log-likelihood (i.e. minimizing perplexity) is not a sufï¬ cient training objective for these models. An important line of future work therefore lies in improving the objective functions for training and response selection, as well as learning directly from interactions with real users. | 1611.06216#11 | 1611.06216#13 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#13 | Generative Deep Neural Networks for Dialogue: A Short Review | 4 # References A. Bordes and J. Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016. A. L. Gorin, G. Riccardi, and J. H. Wright. How may i help you? Speech communication, 23(1):113â 127, 1997. M. Henderson, B. Thomson, and S. Young. | 1611.06216#12 | 1611.06216#14 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#14 | Generative Deep Neural Networks for Dialogue: A Short Review | Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 467â 471, 2013. M. Inaba and K. Takahashi. Neural utterance ranking model for conversational dialogue systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 393, 2016. A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. | 1611.06216#13 | 1611.06216#15 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#15 | Generative Deep Neural Networks for Dialogue: A Short Review | Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), volume 36, pages 495â 503, 2016. J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. In NAACL, 2016a. J. Li, W. Monroe, A. Ritter, and D. Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b. R. Lowe, N. Pow, I. Serban, and J. Pineau. | 1611.06216#14 | 1611.06216#16 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#16 | Generative Deep Neural Networks for Dialogue: A Short Review | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In Proc. of SIGDIAL-2015, 2015. J. Markoff and P. Mozur. For sympathetic ear, more chinese turn to smartphone program. NY Times, 2015. N. Mrkši´c, D. O. Séaghdha, B. Thomson, M. Gaši´c, P.-H. Su, D. Vandyke, T.-H. Wen, and S. Young. Multi- N. Mrk&ié, D. O. Séaghdha, B. Thomson, M. Gaiié, P.-H. Su, D. Vandyke, T.-H. Wen, and S. Young. | 1611.06216#15 | 1611.06216#17 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#17 | Generative Deep Neural Networks for Dialogue: A Short Review | Multi- domain dialog state tracking using recurrent neural networks. In HLT-NAACL, pages 120-129, 2015. domain dialog state tracking using recurrent neural networks. In HLT-NAACL, pages 120â 129, 2015. A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In EMNLP, 2011. I. V. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. Courville. Multiresolution recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776, 2016a. | 1611.06216#16 | 1611.06216#18 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#18 | Generative Deep Neural Networks for Dialogue: A Short Review | I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776â 3784, 2016b. I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016c. L. Shang, Z. Lu, and H. Li. | 1611.06216#17 | 1611.06216#19 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#19 | Generative Deep Neural Networks for Dialogue: A Short Review | Neural responding machine for short-text conversation. In ACL-IJCNLP, pages 1577â 1586, 2015. S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. JAIR, 16:105â 133, 2002. A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. | 1611.06216#18 | 1611.06216#20 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#20 | Generative Deep Neural Networks for Dialogue: A Short Review | A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015. P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In SIGDIAL, 2015. | 1611.06216#19 | 1611.06216#21 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#21 | Generative Deep Neural Networks for Dialogue: A Short Review | O. Vinyals and Q. Le. A neural conversational model. ICML, Workshop, 2015. T.-H. Wen, M. Gasic, N. Mrksic, P.-H. Su, D. Vandyke, and S. Young. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711â 1721, Lisbon, Portugal, September 2015. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1199. T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, and S. Young. | 1611.06216#20 | 1611.06216#22 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#22 | Generative Deep Neural Networks for Dialogue: A Short Review | A network-based end-to-end trainable task-oriented dialogue system. arXiv:1604.04562, 2016. J. Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016. J. Weston, S. Chopra, and A. Bordes. Memory networks. ICLR, 2015. S. Young. Probabilistic methods in spokenâ dialogue systems. Philosophical Transactions of the Royal Society of London. | 1611.06216#21 | 1611.06216#23 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#23 | Generative Deep Neural Networks for Dialogue: A Short Review | Series A: Mathematical, Physical and Engineering Sciences, 358(1769), 2000. Z. Yu, Z. Xu, A. W. Black, and A. I. Rudnicky. Strategy and policy learning for non-task-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 404, 2016. 5 # Appendix # Twitter Results Corpus: We experiment on a Twitter Dialogue Corpus [Ritter et al., 2011] containing about one million dialogues. The task is to generate utterances to append to existing Twitter conversations. | 1611.06216#22 | 1611.06216#24 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#24 | Generative Deep Neural Networks for Dialogue: A Short Review | This task is typically categorized as a non-goal-driven task, because any ï¬ uent and on-topic response may be adequate. Evaluation: We carry out a human study on Amazon Mechanical Turk (AMT). We show human evaluators a dialogue context along with two potential responses: one response generated from each model conditioned on the dialogue context. We ask evaluators to choose the response most appropriate to the dialogue context. If the evaluators are indifferent, they can choose neither response. For each pair of models we conduct two experiments: one where the example contexts contain at least 80 unique tokens (long context), and one where they contain at least 20 (not necessarily unique) tokens (short context). We experiment with the LSTM, HRED and VHRED models, as well as a TF-IDF retrieval-based baseline model. We do not experiment with the MrRNN models, because we do not have appropriate coarse representations for this domain. | 1611.06216#23 | 1611.06216#25 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#25 | Generative Deep Neural Networks for Dialogue: A Short Review | Results: The results given in Table 3 show that VHRED is strongly preferred in the majority of the experiments. In particular, VHRED is strongly preferred over the HRED and TF-IDF baseline models for both short and long context settings. VHRED is also strongly preferred over the LSTM baseline model for long contexts, although the LSTM model is preferred over VHRED for short contexts.For short contexts, the LSTM model is often preferred over VHRED because the LSTM model tends to generate very generic responses. Such generic or safe responses are reasonable for a wide range of contexts, but are not useful when applied through-out a dialogue, because the user would loose interest in the conversation. In conclusion, VHRED performs substantially better overall than competing models, which suggests that the high-dimensional latent variables help model uncertainty and ambiguity in the dialogue context and help generate meaningful responses. Table 3: Wins, losses and ties (in %) of VHRED against baselines based on the human study (mean preferences ± 90% conï¬ dence intervals, where â indicates signiï¬ cant differences at 90% conï¬ dence) | 1611.06216#24 | 1611.06216#26 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#26 | Generative Deep Neural Networks for Dialogue: A Short Review | Opponent Wins Losses Ties Short Contexts VHRED vs LSTM VHRED vs HRED VHRED vs TF-IDF 32.3 ±2.4 42.0 ±2.8â 51.6 ±3.3â 42.5 ±2.6â 31.9 ±2.6 17.9 ±2.5 25.2 ±2.3 26.2 ±2.5 30.4 ±3.0 Long Contexts VHRED vs LSTM 41.9 ±2.2â 41.5 ±2.8â VHRED vs HRED 47.9 ±3.4â VHRED vs TF-IDF 36.8 ±2.2 29.4 ±2.6 11.7 ±2.2 21.3 ±1.9 29.1 ±2.6 40.3 ±3.4 | 1611.06216#25 | 1611.06216#27 | 1611.06216 | [
"1605.06069"
]
|
1611.06216#27 | Generative Deep Neural Networks for Dialogue: A Short Review | 6 | 1611.06216#26 | 1611.06216 | [
"1605.06069"
]
|
|
1611.05763#0 | Learning to reinforcement learn | 7 1 0 2 n a J 3 2 ] G L . s c [ 3 v 3 6 7 5 0 . 1 1 6 1 : v i X r a # LEARNING TO REINFORCEMENT LEARN JX Wang1, Z Kurth-Nelson1, D Tirumala1, H Soyer1, JZ Leibo1, R Munos1, C Blundell1, D Kumaran1,3, M Botvinick1,2 1DeepMind, London, UK 2Gatsby Computational Neuroscience Unit, UCL, London, UK 3Institute of Cognitive Neuroscience, UCL, London, UK {wangjane, zebk, dhruvat, soyer, jzl, munos, cblundell, dkumaran, botvinick} @google.com # ABSTRACT In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is conï¬ gured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience. | 1611.05763#1 | 1611.05763 | [
"1611.01578"
]
|
|
1611.05763#1 | Learning to reinforcement learn | 1 # INTRODUCTION Recent advances have allowed long-standing methods for reinforcement learning (RL) to be newly extended to such complex and large-scale task environments as Atari (Mnih et al., 2015) and Go (Silver et al., 2016). The key enabling breakthrough has been the development of techniques allowing the stable integration of RL with non-linear function approximation through deep learning (LeCun et al., 2015; Mnih et al., 2015). The resulting deep RL methods are attaining human- and often superhuman-level performance in an expanding list of domains (Jaderberg et al., 2016; Mnih et al., 2015; Silver et al., 2016). However, there are at least two aspects of human performance that they starkly lack. First, deep RL typically requires a massive volume of training data, whereas human learners can attain reasonable performance on any of a wide range of tasks with comparatively little experience. Second, deep RL systems typically specialize on one restricted task domain, whereas human learners can ï¬ exibly adapt to changing task conditions. Recent critiques (e.g., Lake et al., 2016) have invoked these differences as posing a direct challenge to current deep RL research. In the present work, we outline a framework for meeting these challenges, which we refer to as deep meta-reinforcement learning, a label that is intended to both link it with and distinguish it from previous work employing the term â meta-reinforcement learningâ | 1611.05763#0 | 1611.05763#2 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#2 | Learning to reinforcement learn | (e.g. Schmidhuber et al., 1996; Schweighofer and Doya, 2003, discussed later). The key concept is to use standard deep RL techniques to train a recurrent neural network in such a way that the recurrent network comes to implement its own, free-standing RL procedure. As we shall illustrate, under the right circumstances, the secondary learned RL procedure can display an adaptiveness and sample efï¬ ciency that the original RL procedure lacks. The following sections review previous work employing recurrent neural networks in the context of meta-learning and describe the general approach for extending such methods to the RL setting. | 1611.05763#1 | 1611.05763#3 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#3 | Learning to reinforcement learn | We 1 then present seven proof-of-concept experiments, each of which highlights an important ramiï¬ cation of the deep meta-RL setup by characterizing agent performance in light of this framework. We close with a discussion of key challenges for next-step research, as well as some potential implications for neuroscience. # 2 METHODS 2.1 BACKGROUND: META-LEARNING IN RECURRENT NEURAL NETWORKS Flexible, data-efï¬ cient learning naturally requires the operation of prior biases. In general terms, such biases can derive from two sources; they can either be engineered into the learning system (as, for example, in convolutional networks), or they can themselves be acquired through learning. The second case has been explored in the machine learning literature under the rubric of meta-learning (Schmidhuber et al., 1996; Thrun and Pratt, 1998). In one standard setup, the learning agent is confronted with a series of tasks that differ from one another but also share some underlying set of regularities. Meta-learning is then deï¬ ned as an effect whereby the agent improves its performance in each new task more rapidly, on average, than in past tasks (Thrun and Pratt, 1998). At an architectural level, meta-learning has generally been conceptualized as involving two learning systems: one lower-level system that learns relatively quickly, and which is primarily responsible for adapting to each new task; and a slower higher-level system that works across tasks to tune and improve the lower-level system. A variety of methods have been pursued to implement this basic meta-learning setup, both within the deep learning community and beyond (Thrun and Pratt, 1998). Of particular relevance here is an approach introduced by Hochreiter and colleagues (Hochreiter et al., 2001), in which a recurrent neural network is trained on a series of interrelated tasks using standard backpropagation. A critical aspect of their setup is that the network receives, on each step within a task, an auxiliary input indicating the target output for the preceding step. For example, in a regression task, on each step the network receives as input an x value for which it is desired to output the corresponding y, but the network also receives an input disclosing the target y value for the preceding step (see Hochreiter et al., 2001; Santoro et al., 2016). | 1611.05763#2 | 1611.05763#4 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#4 | Learning to reinforcement learn | In this scenario, a different function is used to generate the data in each training episode, but if the functions are all drawn from a single parametric family, then the system gradually tunes into this consistent structure, converging on accurate outputs more and more rapidly across episodes. One interesting aspect of Hochreiterâ s method is that the process that underlies learning within each new task inheres entirely in the dynamics of the recurrent network, rather than in the backpropagation procedure used to tune that networkâ s weights. Indeed, after an initial training period, the network can improve its performance on new tasks even if the weights are held constant (see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). A second important aspect of the approach is that the learning procedure implemented in the recurrent network is ï¬ t to the structure that spans the family of tasks on which the network is trained, embedding biases that allow it to learn efï¬ ciently when dealing with tasks from that family. 2.2 DEEP META-RL: DEFINITION AND KEY FEATURES Importantly, Hochreiterâ s original work (Hochreiter et al., 2001), as well as its subsequent extensions (Cotter and Conwell, 1990; Prokhorov et al., 2002; Santoro et al., 2016; Younger et al., 1999) only addressed supervised learning (i.e. the auxiliary input provided on each step explicitly indicated the target output on the previous step, and the network was trained using explicit targets). In the present work we consider the implications of applying the same approach in the context of reinforcement learning. Here, the tasks that make up the training series are interrelated RL problems, for example, a series of bandit problems varying only in their parameterization. Rather than presenting target outputs as auxiliary inputs, the agent receives inputs indicating the action output on the previous step and, critically, the quantity of reward resulting from that action. The same reward information is fed in parallel to a deep RL procedure, which tunes the weights of the recurrent network. It is this setup, as well as its result, that we refer to as deep meta-RL (although from here on, for brevity, we will often simply call it meta-RL, with apologies to authors who have used that term | 1611.05763#3 | 1611.05763#5 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#5 | Learning to reinforcement learn | 2 previously). As in the supervised case, when the approach is successful, the dynamics of the recurrent network come to implement a learning algorithm entirely separate from the one used to train the network weights. Once again, after sufï¬ cient training, learning can occur within each task even if the weights are held constant. However, here the procedure the recurrent network implements is itself a full-ï¬ edged reinforcement learning algorithm, which negotiates the exploration-exploitation tradeoff and improves the agentâ s policy based on reward outcomes. A key point, which we will emphasize in what follows, is that this learned RL procedure can differ starkly from the algorithm used to train the networkâ s weights. In particular, its policy update procedure (including features such as the effective learning rate of that procedure), can differ dramatically from those involved in tuning the network weights, and the learned RL procedure can implement its own approach to exploration. Critically, as in the supervised case, the learned RL procedure will be ï¬ t to the statistics spanning the multi-task environment, allowing it to adapt rapidly to new task instances. 2.3 FORMALISM Let us write as D a distribution (the prior) over Markov Decision Processes (MDPs). We want to demonstrate that meta-RL is able to learn a prior-dependent RL algorithm, in the sense that it will perform well on average on MDPs drawn from D or slight modiï¬ cations of D. An appropriately structured agent, embedding a recurrent neural network, is trained by interacting with a sequence of MDP environments (also called tasks) through episodes. At the start of a new episode, a new MDP task m â ¼ D and an initial state for this task are sampled, and the internal state of the agent (i.e., the pattern of activation over its recurrent units) is reset. The agent then executes its action-selection strategy in this environment for a certain number of discrete time-steps. At each step t an action at â A is executed as a function of the whole history Ht = {x0, a0, r0, . . . , xtâ 1, atâ 1, rtâ 1, xt} of the agent interacting in the MDP m during the current episode (set of states {xs}0â ¤sâ ¤t, actions {as}0â ¤s<t, and rewards {rs}0â | 1611.05763#4 | 1611.05763#6 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#6 | Learning to reinforcement learn | ¤s<t observed since the beginning of the episode, when the recurrent unit was reset). The network weights are trained to maximize the sum of observed rewards over all steps and episodes. After training, the agentâ s policy is ï¬ xed (i.e. the weights are frozen, but the activations are changing due to input from the environment and the hidden state of the recurrent layer), and it is evaluated on a set of MDPs that are drawn either from the same distribution D or slight modiï¬ cations of that distribution (to test the generalization capacity of the agent). The internal state is reset at the beginning of the evaluation of any new episode. Since the policy learned by the agent is history-dependent (as it makes uses of a recurrent network), when exposed to any new MDP environment, it is able to adapt and deploy a strategy that optimizes rewards for that task. # 3 EXPERIMENTS In order to evaluate the approach to learning that we have just described, we conducted a series of six proof-of-concept experiments, which we present here along with a seventh experiment originally reported in a related paper (Mirowski et al., 2016). One particular point of interest in these experiments was to see whether meta-RL could be used to learn an adaptive balance between exploration and exploitation, as demanded of any fully-ï¬ edged RL procedure. A second and still more important focus was on the question of whether meta-RL can give rise to learning that gains efï¬ ciency by capitalizing on task structure. In order to examine these questions, we performed four experiments focusing on bandit tasks and two additional experiments focusing on Markov decision problems. All of our experiments (as well as the additional experiment we report) employ a common set of methods, with minor implementational variations. In all experiments, the agent architecture centers on a recurrent neural network (LSTM; Hochreiter and Schmidhuber, 1997) feeding into a soft-max output representing discrete actions. As detailed below, the parameters of this network core, as well as some other architectural details, varied across experiments (see Figure 1 and Table 1). However, it is important to emphasize that comparisons between speciï¬ c architectures are outside the scope of this paper. Our main aim is to illustrate and validate the meta-RL framework in a more general way. To this end, all experiments used the high-level task setup previously described: | 1611.05763#5 | 1611.05763#7 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#7 | Learning to reinforcement learn | Both training and testing were organized into ï¬ xed-length episodes, each involving a task randomly sampled from a predetermined task distribution, with the LSTM hidden state initialized at the beginning of each episode. Task-speciï¬ c inputs and 3 Parameter Exps.1&2 Exp.3 Exp. 4 Exp. 5 Exp. 6 No. threads 1 1 1 1 32 No. LSTMs 1 1 1 1 2 No. hiddens 48 48 48 48 256/64 Steps unrolled 100 5 150 20 100 Be annealed annealed annealed 0.05 0.001 Bo 0.05 0.05 0.05 0.05 0.4 Learning rate tuned 0.001 0.001 tuned tuned Discount factor tuned 0.8 0.8 tuned tuned Input a,r,t a,r,t a,r,t a,r,t,x a,7T, x Observation 1-hot RGB (84x84) No. trials/episode 100 5 150 10 10 Episode length 100 5 150 20 <3600 Table 1: List of hyperparameters. βe = coefï¬ cient of entropy regularization loss; in Exps. 1-4, βe is annealed from 1.0 to 0.0 over the course of training. βv = coefï¬ cient of value function loss (Mirowski et al., 2016). r = reward, a = last action, t = current time step, x = current observation. Exp. 1: Bandits with independent arms (Section 3.1.1); Exp. 2: Bandits with dependent arms I (Section 3.1.2); Exp. 3: Bandits with dependent arms II (Section 3.1.3); Exp. 4: Restless bandits (Section 3.1.4); Exp. 5: The â Two-Step Taskâ (Section 3.2.1); Exp. 6: Learning abstract task structure (Section 3.2.2). action outputs are described in conjunction with individual experiments. In all experiments except where speciï¬ | 1611.05763#6 | 1611.05763#8 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#8 | Learning to reinforcement learn | ed, the input included a scalar indicating the reward received on the preceding time-step as well as a one-hot representation of the action sampled on that time-step. All reinforcement learning was conducted using the Advantage Actor-Critic algorithm, as detailed in Mnih et al. (2016) and Mirowski et al. (2016) (see also Figure 1). Details of training, including the use of entropy regularization and a combined policy and value estimate loss, closely follow the methods detailed in Mirowski et al. (2016), with the exception that our experiments used a single thread unless otherwise noted. | 1611.05763#7 | 1611.05763#9 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#9 | Learning to reinforcement learn | For a full listing of parameters refer to Table 1. v v ma ma Cc | Cc | a Vv Pree ¢ * a yO = e enc enc x, a an U x, Lv a.) x, a a (a) LSTM A2C (b) LSTM A3C (c) Stacked-LSTM A3C Figure 1: Advantage actor-critic with recurrence. In all architectures, reward and last action are additional inputs to the LSTM. For non-bandit environments, observation is also fed into the LSTM either as a one-hot or passed through an encoder model [3-layer encoder: two convolutional layers (ï¬ rst layer: 16 8x8 ï¬ lters applied with stride 4, second layer: 32 4x4 ï¬ lters with stride 2) followed by a fully connected layer with 256 units and then a ReLU non-linearity. | 1611.05763#8 | 1611.05763#10 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#10 | Learning to reinforcement learn | See for details Mirowski et al. (2016)]. For bandit experiments, current time step is also fed in as input. Ï = policy; v = value function. A3C is the distributed multi-threaded asynchronous version of the advantage actor-critic algorithm (Mnih et al., 2016); A2C is single threaded. (a) Architecture used in experiments 1-5. (b) Convolutional-LSTM architecture used in experiment 6. (c) Stacked LSTM architecture with convolutional encoder used in experiments 6 and 7. | 1611.05763#9 | 1611.05763#11 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#11 | Learning to reinforcement learn | 4 3.1 BANDIT PROBLEMS As an initial setting for evaluating meta-RL, we studied a series of bandit problems. Except for a very limited set of bandit environments, it is intractable to compute the (prior-dependent) Bayesian-optimal strategy. Here we demonstrate that a recurrent system trained on a set of bandit environments drawn i.i.d. from a given distribution of environments produces a bandit algorithm which performs well on problems drawn from that distribution, and to a certain extent generalizes to related distributions. Thus, meta-RL learns a prior-dependent bandit algorithm. The specific bandit instantiation of the general meta-RL procedure described in Section[2.3]is defined as follows. Let D be a training distribution over bandit environments. The meta-RL system is trained on a sequence of bandit environments through episodes. At the start of a new episode, its LSTM state is reset and a bandit task b ~ D is sampled. A bandit task is defined as a set of distributions â one for each arm â from which rewards are sampled. The agent plays in this bandit environment for a certain number of trials and is trained to maximize observed rewards. After training, the agentâ s policy is evaluated on a set of bandit tasks that are drawn from a test distribution Dâ , which can either be the same as D or a slight modification of it. We evaluate the resulting performance of the learned bandit algorithm by the cumulative regret, a measure of the loss (in expected rewards) suffered when playing sub-optimal arms. Writing Ha(b) the expected reward of arm a in bandit environment 6, and u*(b) = maXq fla(b) = Hav) (0) (where a*(b) is one optimal arm) the optimal expected reward, we define the cumulative regret (in environment b) as Rr(b) = 7)_, pe*(b) â fia, (b), where a, is the arm (action) chosen at time t. In experiment 4 (Restless bandits; Section 3.1.4), .* also depends on t. We report the performance (average over bandit environments drawn from the test distribution) either in terms of the cumulative regret: E,~p/[Rr(b)] or in terms of number of sub-optimal pulls: Eyvp[S7)_, Har 4 a*(b)}]- | 1611.05763#10 | 1611.05763#12 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#12 | Learning to reinforcement learn | # 3.1.1 BANDITS WITH INDEPENDENT ARMS We ï¬ rst consider a simple two-armed bandit task to examine the behavior of meta-RL under conditions where theoretical guarantees exist and general purpose algorithms apply. The arm distributions are independent Bernoulli distributions (rewards are 1 with probability p and 0 with probability 1 â p), where the parameters of each arm (p1 and p2) are sampled independently and uniformly over [0, 1]. We denote by Di the corresponding distribution over these independent bandit environments (where the subscript i stands for independent arms). At the beginning of each episode, a new bandit task is sampled and held constant for 100 trials. Training lasted for 20,000 episodes. The network is given as input the last reward, last action taken, and the trial number t, subsequently producing the action for the next trial t + 1 (Figure 1). After training, we evaluated on 300 new episodes with the learning rate set to zero (the learned policy is ï¬ | 1611.05763#11 | 1611.05763#13 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#13 | Learning to reinforcement learn | xed). Across model instances, we randomly sampled learning rate and discount, following Mnih et al. (2016). For all ï¬ gures, we plotted the average of the top 5 runs of 100 randomly sampled hyperparameter settings, where the top agents were selected from the ï¬ rst half of the 300 evaluation episodes and performance was plotted for the second half. We measured the cumulative expected regret across the episode, comparing with several algorithms tailored for this independent bandit setting: Gittins indices (Gittins, 1979) (which is Bayesian optimal in the ï¬ nite-horizon case), UCB (Auer et al., 2002) (which comes with theoretical ï¬ nite-time regret guarantees), and Thompson sampling (Thompson, 1933) (which is asymptotically optimal in this setting: see Kaufmann et al., 2012b). Model simulations were conducted with the PymaBandits toolbox from (Kaufmann et al., 2012a) and custom Matlab scripts. As shown in Figure 2a (green line; â Independentâ ), meta-RL outperforms both Thompson sampling (gray dashed line) and UCB (light gray dashed line), although it performs less well compared to Gittins (black dashed line). To verify the critical importance of providing reward information to the LSTM, we removed this input, leaving all other inputs as before. As expected, performance was at chance levels on all bandit tasks. | 1611.05763#12 | 1611.05763#14 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#14 | Learning to reinforcement learn | 5 (a) (b) Testing: Independent = Sub-optimal arm pulls â LSTM A2C â Independentâ - 4g 3 =sGittins S = Thompson 2 uce 7 £ 3 - z1 5 fel - ° *° â Srialy O° 80100 Trial # 100 i) Testing: Dependent Uniform (d) Testing: Easy LSTM A2C â Dependent Uniformâ â LSTM A2C â Mediumâ 3 3) Gittins g 3) ---Gittins 5 £ = thompson 3 2 B 2 Ea | uessszeti----4 g g 5 ict : F g é 0 20 40 60 80-100 0 20 40 60 80 100 Trial # Trial # Cumulative regret ) Testing: Hard © 9 â LSTM A2C â Mediumâ A Indep. 077 =-Gittins ~~ Thompson < m3 UB Unit. 1s 0.67 12 4 2 g & 22 fay 1s) ose o1a S £ E £ oi = Med. 13 07 1a 0 20 40. 60 80 100 Hard 1 15 Trial # Indep. Unif. Easy += Med. â Hard # Testing Condition Figure 2: Performance on independent- and correlated-arm bandits. We report performance as the cumulative expected regret RT for 150 test episodes, averaged over the top 5 hyperparameters for each agent-task con- ï¬ guration, where the top 5 was determined based on performance on a separate set of 150 test episodes. (a) LSTM A2C trained and evaluated on bandits with independent arms (distribution Di; see text), and compared with theoretically optimal models. (b) A single agent playing the medium difï¬ culty task with distribution Dm. Suboptimal arm pulls over trials are depicted for 300 episodes. (c) LSTM A2C trained and evaluated on bandits with dependent uniform arms (distribution Du), (d) trained on medium bandit tasks (Dm) and tested on easy (De), and (e) trained on medium (Dm) and tested on hard task (Dh). | 1611.05763#13 | 1611.05763#15 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#15 | Learning to reinforcement learn | (f) Cumulative regret for all possible combinations of training and testing environments (Di, Du, De, Dm, Dh). 3.1.2 BANDITS WITH DEPENDENT ARMS (I) As we have emphasized, a key property of meta-RL is that it gives rise to a learned RL algorithm that exploits consistent structure in the training distribution. In order to garner empirical evidence for this point, we tested the agent from our ï¬ rst experiment in a more structured bandit task. Speciï¬ cally, we trained the system on two-arm bandits in which arm reward distributions are correlated. In this setting, unlike the one studied in the previous section, experience with either arm provides information about the other. Standard bandit algorithms, including UCB and Thompson sampling, perform suboptimally in this setting, as they are not designed to exploit such correlations. In some cases it is possible to tailor algorithms for speciï¬ c arm structures (see for example Lattimore and Munos, 2014), but extensive problem-speciï¬ c analysis is typically required. Our approach aims to learn a structure-dependent bandit algorithm directly from experience with the target bandit domain. We consider Bernoulli distributions where the parameters (p1, p2) of the two arms are correlated in the sense that p1 = 1 â p2. We consider several training and test distributions. The uniform means that p1 â ¼ U([0, 1]) (uniform distribution over the unit interval). The easy means that p1 â ¼ U({0.1, 0.9}) (uniform distribution over those two possible values), and similarly we call medium when p1 â ¼ U({0.25, 0.75}) and hard when p1 â ¼ U({0.4, 0.6}). We denote by Du, De, Dm, and Dh the corresponding induced distributions over bandit environments. | 1611.05763#14 | 1611.05763#16 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#16 | Learning to reinforcement learn | In addition 6 we also considered the independent uniform distribution (as in the previous section, Di) where p1, p2 â ¼ U([0, 1]) independently. Agents were both trained and tested on those ï¬ ve distributions over bandit environments (among which four correspond to correlated distributions: Du, De, Dm and Dh; and one to the independent case: Di). As a validation of the names given to the task distributions (De, Dm, Dh), results show that the easy task is easier to learn than the medium which itself is easier than the hard one (Figure 2f). This is compatible with the general notion that the hardness of a bandit problem is inversely proportional to the difference between the expected reward of the optimal and sub-optimal arms. We again note that withholding the reward input to the LSTM resulted in chance performance on even the easiest bandit task, as should be expected. Figure 2f reports the results of all possible training-testing regimes. From observing the cumulative expected regrets, we make the following observations: i) agents trained in structured environments (Du, De, Dm, and Dh) develop prior knowledge that can be used effectively when tested on structured distributions â performing comparably to Gittins (Figure 2c-f), and superiorly compared to agents trained on independent arms (Di) in all structured tasks at test (Figure 2f). This is because an agent trained on independent rewards (Di) has not learned to exploit the reward correlations that are useful in those structured tasks. ii) Conversely, previous training on any structured distribution (Du, De, Dm, or Dh) hurts performance when agents are tested on an independent distribution (Di; Figure 2f). This makes sense, as training on correlated arms may produce a policy that relies on speciï¬ c reward structure, thereby impacting performance in problems where no such structure exists. iii) Whilst the previous results emphasize the point that meta-RL gives rise to a separate learnt RL algorithm that implements prior-dependent bandit strategies, results also provide evidence that there is some generalization beyond the exact training distribution encountered (Figure 2f). For example, agents trained on the distributions De and Dm perform well when tested over a much wider structured distribution (i.e. | 1611.05763#15 | 1611.05763#17 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#17 | Learning to reinforcement learn | Du). Further, our evidence suggests that there is generalization from training on the easier tasks (De,Dm) to testing on the hardest task (Dh; Figure 2e), with similar or even marginally superior performance as compared to training on the hard distribution Dh itself(Figure 2f). In contrast, training on the hard distribution Dh results in relatively poor generalization to other structured distributions (Du, De, Dm), suggesting that training purely on hard instances may result in a learned RL algorithm that is more constrained by prior knowledge, perhaps due to the difï¬ culty of solving the original problem. # 3.1.3 BANDITS WITH DEPENDENT ARMS (II) In the previous experiment, the agent could outperform standard bandit algorithms by making use of learned dependencies between arms. However, it could do this while always choosing what it believes to be the highest-paying arm. We next examine a problem where information can be gained by paying a short-term reward cost. Similar problems have been examined before as providing a challenge to standard bandit algorithms (see e.g. Russo and Van Roy, 2014). In contrast, humans and animals make decisions that sacriï¬ ce immediate reward for information gain (e.g. Bromberg-Martin and Hikosaka, 2009). In this experiment, the agent was trained on 11-armed bandits with strong dependencies between arms. All arms had deterministic payouts. | 1611.05763#16 | 1611.05763#18 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#18 | Learning to reinforcement learn | Nine â non-targetâ arms had reward = 1, and one â targetâ arm had reward = 5. Meanwhile, arm a11 was always â informativeâ , in that the target arm was indexed by 10 times a11â s reward (e.g. a reward of 0.2 on a11 indicated that a2 was the target arm). Thus, a11â s payouts ranged from 0.1 to 1. In each episode, the index of the target arm was randomly assigned. | 1611.05763#17 | 1611.05763#19 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#19 | Learning to reinforcement learn | On the ï¬ rst trial of each episode, the agent could not know which arm was the target, so the informative arm returned expected reward 0.55 and every target arm returned expected reward 1.4. Choosing the informative arm thus meant foregoing immediate reward, but with the compensation of valuable information. Episodes were ï¬ ve steps long. Again, the reward on the previous trial was provided as an additional observation to the agent. To facilitate learning, this was encoded in 1-hot format. Results are shown in Figure 3. The agent learned the optimal long-run strategy of sampling the informative arm once, despite the short-term cost, and then using the resulting information to exploit the high-value target arm. Thompson sampling, if supplied the true prior, searched potential target arms and exploited the target if found. UCB performed worse because it sampled every arm once even if the target arm was found early. | 1611.05763#18 | 1611.05763#20 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#20 | Learning to reinforcement learn | 7 15 + LSTM A2C â Oâ Optimal ~~ =~: Thompson 4 10 + UCB one Cumulative Regret Trial # Figure 3: Learned RL procedure pays immediate cost to gain information to improve long-run returns. In this task, one arm is lower-paying but provides perfect information about which of the other ten arms is highest-paying. The remaining nine arms are intermediate in reward. The index of the informative arm is ï¬ xed between episodes, but the index of the highest-paying arm is randomized between episodes. On the ï¬ rst trial, the trained agent samples the informative arm. On subsequent trials, the agent uses the information it gained to deterministically exploit the highest-paying arm. Thompson sampling and UCB are not able to take advantage of the dependencies between arms. # 3.1.4 RESTLESS BANDITS In previous experiments we considered stationary problems where the agentâ s actions yielded in- formation about task parameters that remained ï¬ xed throughout each episode. Next, we consider a bandit problem in which reward probabilities change over the course of an episode, with different rates of change (volatilities) in different episodes. To perform well, the agent must not only track the best arm, but also infer the volatility of the episode and adjust its own learning rate accordingly. In such an environment, learning rates should be higher when the environment is changing rapidly, because past information becomes irrelevant more quickly (Behrens et al., 2007; Sutton and Barto, 1998). We tested whether meta-RL would learn such a flexible RL policy using a two-armed Bernoulli bandit task with reward probabilities p; and 1-p;. The value of p; changed slowly in â low volâ episodes and quickly in â high volâ episodes. The agent had no way of knowing which type of episode it was in, except for its reward history within the episode. Figur shows example â low volâ and â high volâ episodes. Reward magnitude was fixed at 1, and episodes were 100 steps long. | 1611.05763#19 | 1611.05763#21 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#21 | Learning to reinforcement learn | UCB and Thompson sampling were again implemented for comparison. The confidence bound term J xlog n ni Thompson sampling were again implemented for comparison. The conï¬ dence bound term in UCB had parameter Ï which was set to 1, selected empirically for good performance on our data set. Thompson samplingâ s posterior update included knowledge of the Gaussian random walk, but with a ï¬ xed volatility for all episodes. As in the previous experiment, meta-RL achieved lower regret in test than Thompson sampling, UCB, or the Rescorla-Wagner (R-W) learning rule (Figure[4p;|Rescorla et al.|[1972) fixed learning rate (a=0.5). To test whether the agent adjusted its effective learning rate to match environments with different volatility levels, we fit R-W models to the agentâ | 1611.05763#20 | 1611.05763#22 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#22 | Learning to reinforcement learn | s behavior, concatenating episodes into blocks of 10, where each block consisted of only â low volâ or only â high volâ episodes. We considered four different models encompassing different combinations of three parameters: learning rate a, softmax inverse temperature 3, and a lapse rate â ¬ to account for unexplained choice variance not related to estimated value{Economides et al.| (2015). Model â bâ included only 8, â abâ included a and 3, â beâ included 3 and e, and â abeâ included all three. All parameters were estimated separately on each block of 10 episodes. In models where â ¬ and a were not free, they were fixed to 0 and 0.5, respectively. Model comparison by Bayesian Information Criterion (BIC) indicated that meta-RLâ s behavior was better described by a model with different learning rates for each block than a model with a fixed learning rate across blocks. As a control, we performed the same model comparison on the behavior produced by the best R-W agent, finding no benefit of allowing different learning rates across episodes (models â | 1611.05763#21 | 1611.05763#23 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#23 | Learning to reinforcement learn | abeâ and â abâ vs â beâ and â bâ ; Figure[4p-d). In these models, the parameter estimates for meta-RLâ s behavior were strongly related to the volatility of the episodes, indicating that meta-RL adjusted its learning rate to the volatility of the episode, whereas model fitting the R-W behavior simply recovered the fixed parameters (Figure/4p-f). 8 (a) LSTM, low vol (c) (e) 0.75 AAT RN EET TH R-W RW â Tr 50 ee ee ee ~ meen R-W, low vol Ss 40 RRR â ODER ART OEY Ke 0.75" $s © + low vol episodes kee) a0 0400 100 ee OG â _ 26d 0 oO s 2 30 « high vol episodes LSTM, high vol av g Ly (re pepsin i Ap aie ee s i imi hl, ek Ld Led "10 R-W, high vol - true p ey LP ee Sa EER Rap ET RACED) o action s 0 â uly! VEL |) \ yl feedback! oo fo y Vt Ly Ly] be Qala: dot a Jets es eas als foes Ms aaa 22 6 0 20°40 1,00 80100 © 9 02 04 06 ste| (b) P (a) LsTM (f) LsTM 50 20 - â LSTM A2c a se best R-W. & 40 a = = Thompson Sf L o ucs be) P _ 30 . o os 3 @ os 5 ao & = 20 Ss 2 é SS 10 s Ry i 0 20 40 60 so 100 © 8 S$ 8 * 0 02 04 06 step © model alpha Figure 4: Learned RL procedure adapts its own learning rate to the environment. (a) Agents were trained on two-armed bandits with perfectly anti-correlated Bernoulli reward probabilities, p1 and 1-p1. Two example episodes are shown. p1 changed within an episode (solid black line), with a fast Poisson jump rate in â | 1611.05763#22 | 1611.05763#24 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#24 | Learning to reinforcement learn | high volâ episodes and a slow rate in â low volâ episodes. (b) The trained LSTM agent outperformed UCB, Thompson sampling, and a Rescorla-Wagner (R-W) learner with ï¬ xed learning rate α=0.5 (selected for being optimal on average in this distribution of environments). (c,d) We ï¬ t R-W models by maximum likelihood both to the behavior of R-W (as a control) and to the behavior of LSTM. Models including a learning rate that could vary between episodes (â | 1611.05763#23 | 1611.05763#25 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#25 | Learning to reinforcement learn | abâ and â abeâ ) outperformed models without these free parameters on LSTMâ s data, but not on R-Wâ s data. Addition of a lapse parameter further improved model ï¬ ts on LSTMâ s data (â beâ and â abeâ ), suggesting that the algorithm implemented by LSTM is not exactly Rescorla-Wagner. (e,f) The LSTMâ s, but not R-Wâ s, estimated learning rate was higher in volatile episodes. Small jitter added to visualize overlapping points. 3.2 MARKOV DECISION PROBLEMS The foregoing experiments focused on bandit tasks in which actions do not affect the taskâ s underlying state. We turn now to MDPs where actions do inï¬ uence state. We begin with a task derived from the neuroscience literature and then turn to a task, originally studied in the context of animal learning, which requires learning of abstract task structure. As in the previous experiments, our focus is on examining how meta-RL adapts to invariances in task structure. We wrap up by reviewing an experiment recently reported in a related paper (Mirowski et al., 2016), which demonstrates how meta-RL can scale to large-scale navigation tasks with rich visual inputs. | 1611.05763#24 | 1611.05763#26 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#26 | Learning to reinforcement learn | 3.2.1 THE â TWO-STEP TASKâ Here we examine meta-RL in a setting that has been widely used in the neuroscience literature to distinguish the contribution of different systems viewed to support decision making (Daw et al., 2005). Speciï¬ cally, this paradigm â known as the â two-step taskâ (Daw et al., 2011) â was developed to dissociate a model-free system that caches values of actions in states (e.g. TD(1) Q-learning; see Sutton and Barto, 1998), from a model-based system which learns an internal model of the environment and evaluates the value of actions at the time of decision-making through look-ahead planning (Daw et al., 2005). Our interest was in whether meta-RL would give rise to behavior emulating a model-based strategy, despite the use of a model-free algorithm (in this case A2C) to train the system weights. | 1611.05763#25 | 1611.05763#27 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#27 | Learning to reinforcement learn | 9 We used a modiï¬ ed version of the two-step task, designed to bolster the utility of model-based over model-free control (see Kool et al., 2016). The taskâ s structure is diagrammed in Figure 5a. From the ï¬ rst-stage state S1, action a1 leads to second-stage states S2 and S3 with probability 0.75 and 0.25, respectively, while action a2 leads to S2 and S3 with probabilities 0.25 and 0.75. One second-stage state yielded a reward of 1.0 with probability 0.9 (and otherwise zero); the other yielded the same reward with probability 0.1. The identity of the higher-valued state was assigned randomly for each episode. Thus, the expected values for the two ï¬ rst-stage actions were either ra = 0.9 and rb = 0.1, or ra = 0.1 and rb = 0.9. All three states were represented by one-hot vectors, with the transition model held constant across episodes: i.e. only the expected value of the second stage states changed from episode to episode. We applied the conventional analysis used in the neuroscience literature to dissociate model-free from model-based control (Daw et al., 2011). This focuses on the â stay probability,â that is, the probability with which a ï¬ rst-stage action is selected at trial t + 1 following a second-stage reward at trial t, as a function of whether trial t involved a common transition (e.g. action a1 at state S1 led to S2) or rare transition (action a2 at state S1 led to S3). Under the standard interpretation (see Daw et al., 2011), model-free control â à la TD(1) â predicts that there should be a main effect of reward: First-stage actions will tend to be repeated if followed by reward, regardless of transition type, and such actions will tend not to be repeated (choice switch) if followed by non-reward (Figure 5b). In contrast, model-based control predicts an interaction between the reward and transition type, reï¬ ecting a more goal-directed strategy, which takes the transition structure into account. | 1611.05763#26 | 1611.05763#28 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#28 | Learning to reinforcement learn | Intuitively, if you receive a second-stage reward (e.g. at S2) following a rare transition (i.e. having taken action a2 at state S1), to maximize your chances of getting to this reward on the next trial based on your knowledge of the transition structure, the optimal ï¬ rst stage action is a1 (i.e. switch). The results of the stay-probability analysis performed on the agentâ s choices show a pattern conven- tionally interpreted as implying the operation of model-based control (Figure 5c). As in previous experiments, when reward information was withheld at the level of network input, performance was at chance levels. If interpreted following standard practice in neuroscience, the behavior of the model in this experiment reï¬ ects a surprising effect: training with model-free RL gives rise to behavior reï¬ ecting model-based control. We hasten to note that different interpretations of the observed pattern of behavior are available (Akam et al., 2015), a point to which we will return below. However, notwithstanding this caveat, the results of the present experiment provide a further illustration of the point that the learning procedure that emerges from meta-RL can differ starkly from the original RL algorithm used to train the network weights, and takes a form that exploits consistent task structure. # 3.2.2 LEARNING ABSTRACT TASK STRUCTURE | 1611.05763#27 | 1611.05763#29 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#29 | Learning to reinforcement learn | In the ï¬ nal experiment we conducted, we took a step towards examining the scalabilty of meta-RL, by studying a task that involves rich visual inputs, longer time horizons and sparse rewards. Additionally, in this experiment we studied a meta-learning task that requires the system to tune into an abstract task structure, in which a series of objects play deï¬ ned roles which the system must infer. The task was adapted from a classic study of animal behavior, conducted by Harlow (1949). On each trial in the original task, Harlow presented a monkey with two visually contrasting objects. One of these covered a small well containing a morsel of food; the other covered an empty well. The animal chose freely between the two objects and could retrieve the food reward if present. The stage was then hidden and the left-right positions of the objects were randomly reset. A new trial then began, with the animal again choosing freely. This process continued for a set number of trials using the same two objects. At completion of this set of trials, two entirely new and unfamiliar objects were substituted for the original two, and the process began again. Importantly, within each block of trials, one object was chosen to be consistently rewarded (regardless of its left-right position), with the other being consistently unrewarded. What Harlow (Harlow, 1949) observed was that, after substantial practice, monkeys displayed behavior that reï¬ ected an understanding of the taskâ s rules. When two new objects were presented, the monkeyâ s ï¬ rst choice between them was necessarily arbitrary. But after observing the outcome of this ï¬ rst choice, the monkey was at ceiling thereafter, always choosing the rewarded object. | 1611.05763#28 | 1611.05763#30 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#30 | Learning to reinforcement learn | 10 (a) Two-step task (b) Model predictions Model-based Model-free Last trial transition Common = Fare § & . 6 0.0 0.0 & Lasttrial rewarded Last trial not rewarded Last trial rewarded Last trial not rewarded Last trial transition â == Common mm Rare Last trial rewarded Last trial not rewarded # (c) LSTM A2C with reward input Figure 5: Three-state MDP modeled after the â two-step taskâ from Daw et al. (2011). (a) MDP with 3 states and 2 actions. All trials start in state S1, with transition probabilities after taking actions a1 or a2 depicted in the graph. S2 and S3 result in expected rewards ra and rb (see text). (b) Predictions of choice probabilities given either a model-based strategy or a model-free strategy (Daw et al., 2011). | 1611.05763#29 | 1611.05763#31 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#31 | Learning to reinforcement learn | Speciï¬ cally, model-based strategies take into account transition probabilities and would predict an interaction between the amount of reward received on the last trial and the transition (common or uncommon) observed. (c) Agent displays a perfectly model-based proï¬ le when given the reward as input. We anticipated that meta-RL should give rise to the same pattern of abstract one-shot learning. In order to test this, we adapted Harlowâ s paradigm into a visual ï¬ xation task, as follows. A 84x84 pixel input represented a simulated computer screen (see Figure 6a-c). At the beginning of each trial, this display was blank except for a small central ï¬ xation cross (red crosshairs). The agent selected discrete left-right actions which shifted its view approximately 4.4 degrees in the corresponding direction, with a small momentum effect (alternatively, a no-op action could be selected). The completion of a trial required performing two tasks: saccading to the central ï¬ xation cross, followed by saccading to the correct image. If the agent held the ï¬ xation cross in the center of the ï¬ eld of view (within a tolerance of 3.5 degrees visual angle) for a minimum of four time steps, it received a reward of 0.2. The ï¬ xation cross then disappeared and two images â drawn randomly from the ImageNet dataset (Deng et al., 2009) and resized to 34x34 â appeared on the left and right side of the display (Figure 6b). | 1611.05763#30 | 1611.05763#32 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#32 | Learning to reinforcement learn | The agentâ s task was then to â selectâ one of the images by rotating until the center of the image aligned with the center of the visual ï¬ eld of view (within a tolerance of 7 degrees visual angle). Once one of the images was selected, both images disappeared and, after an intertrial interval of 10 time-steps, the ï¬ xation cross reappeared, initiating the next trial. Each episode contained a maximum of 10 trials or 3600 steps. Following Mirowski et al. (2016), we implemented an action repeat of 4, meaning that selecting an image took a minimum of three independent decisions (twelve primitive actions) after having completed the ï¬ | 1611.05763#31 | 1611.05763#33 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#33 | Learning to reinforcement learn | xation. It should be noted, however, that the rotational position of the agent was not limited; that is, 360 degree rotations could occur, while the simulated computer screen only subtended 65 degrees. Although new ImageNet images were chosen at the beginning of each episode (sampled with replacement from a set of 1000 images), the same images were re-used across all trials within an episode, though in randomly varying left-right placement, similar to the objects in Harlowâ s experiment. And as in that experiment, one image was arbitrarily chosen to be the â rewardedâ image throughout the episode. Selection of this image yielded a reward of 1.0, while the other image yielded a reward of -1.0. During test, the A3C learning rate was set to zero and ImageNet images were drawn from a separate held-out set of 1000, never presented during training. A grid search was conducted for optimal hyperparameters. At perfect performance, agents can complete one trial per 20-30 steps and achieve a maximum expected reward of 9 per 10 trials. | 1611.05763#32 | 1611.05763#34 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#34 | Learning to reinforcement learn | Given 11 (a) Fixation (b) Image display (c) Right saccade and selection (d) Training performance (e) Robustness over random seeds (f) One-shot learning Rewardstvial Random ° 40000 89000 120000 Episodesthread 12345678910 Tale Random 0 2% 46d BO 100 Rank Figure 6: Learning abstract task structure in visually rich 3D environment. a-c) Example of a single trial, beginning with a central ï¬ xation, followed by two images with random left-right placement. d) Average performance (measured in average reward per trial) of top 40 out of 100 seeds during training. Maximum expected performance is indicated with black dashed line. e) Performance at episode 100,000 for 100 random seeds, in decreasing order of performance. f) Probability of selecting the rewarded image, as a function of trial number for a single A3C stacked LSTM agent for a range of training durations (episodes per thread, 32 threads). the nature of the task â which requires one-shot image-reward memory together with maintenance of this information over a relatively long timescale (i.e. over ï¬ xation-cross selections and across trials) â we assessed the performance of not only a convolutional-LSTM architecture which receives reward and action as additional input (see Figure 1b and Table 1), but also a convolutional-stacked LSTM architecture used in a navigation task discussed below (see Figure 1c). Agent performance is illustrated in Figure 6d-f. Whilst the single LSTM agent was relatively successful at solving the task, the stacked-LSTM variant exhibited much better robustness. That is, 43% of random seeds of the best hyperparameter set performed at ceiling (Figure 6e), compared to 26% of the single LSTM. | 1611.05763#33 | 1611.05763#35 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#35 | Learning to reinforcement learn | Like the monkeys in Harlowâ s experiment (Harlow, 1949), the networks converge on an optimal policy: Not only does the agent successfully ï¬ xate to begin each trial, but starting on the second trial of each episode it invariably selects the rewarded image, regardless of which image it selected on the ï¬ rst trial(Figure 6f). This reï¬ ects an impressive form of one-shot learning, which reï¬ ects an implicit understanding of the task structure: After observing one trial outcome, the agent binds a complex, unfamiliar image to a speciï¬ c task role. Further experiments, reported elsewhere (Wang et al., 2017), conï¬ rmed that the same recurrent A3C system is also able to solve a substantially more difï¬ cult version of the task. In this task, only one image â which was randomly designated to be either the rewarding item to be selected, or the unrewarding item to be avoided â was presented on every trial during an episode, with the other image presented being novel on every trial. 3.2.3 ONE-SHOT NAVIGATION The experiments using the Harlow task demonstrate the capacity of meta-RL to operate effectively within a visually rich environment, with relatively long time horizons. Here we consider related experiments recently reported within the navigation domain (Mirowski et al., 2016) (see also Jaderberg et al., 2016), and discuss how these can be recast as examples of meta-RL â attesting to the scaleability of this principle to more typical MDP settings that pose challenging RL problems due to dynamically changing sparse rewards. | 1611.05763#34 | 1611.05763#36 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#36 | Learning to reinforcement learn | 12 (a) Labryinth I-maze (b) Illustrative Episode sy Value function w i) 100 200 300 400 500 600 700 800 Time step in episode â FF A3c (87) â Nav 3c (260) 0.0 0.2 o4 0.6 08 Lo ee # (c) Performance # (d) Value Function Figure 7: a) view of I-maze showing goal object in one of the 4 alcoves b) following initial exploration (light trajectories), agent repeatedly goes to goal (blue trajectories) c) Performance of stacked LSTM (termed â Nav A3Câ ) and feedforward (â FF A3Câ ) architectures, per episode (goal = 10 points) averaged across top 5 hyperparameters. e) following initial goal discovery (goal hits marked in red), value function occurs well in advance of the agent seeing the goal which is hidden in an alcove. | 1611.05763#35 | 1611.05763#37 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#37 | Learning to reinforcement learn | Figure used with permission from Mirowski et al. (2016). Speciï¬ cally, we consider a setting where the environment layout is ï¬ xed but the goal changes location randomly each episode (Figure 7; Mirowski et al., 2016). Although the layout is relatively simple, the Labyrinth environment (see for details Mirowski et al., 2016) is richer and more ï¬ nely discretized (cf VizDoom), resulting in long time horizons; a trained agent takes approximately 100 steps (10 seconds) to reach the goal for the ï¬ | 1611.05763#36 | 1611.05763#38 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#38 | Learning to reinforcement learn | rst time in a given episode. Results show that a stacked LSTM architecture (Figure 1c), that receives reward and action as additional inputs equivalent to that used in our Harlow experiment achieves near-optimal behavior â showing one-shot memory for the goal location after an initial exploratory period, followed by repeated exploitation (see Figure 7c). This is evidenced by a substantial decrease in latency to reach the goal for the ï¬ rst time (~100 timesteps) compared to subsequent visits (~30 timesteps). Notably, a feedforward network (see Figure 7c), that receives only a single image as observation, is unable to solve the task (i.e. no decrease in latency between successive goal rewards). Whilst not interpreted as such in Mirowski et al. (2016), this provides a clear demonstration of the effectiveness of meta-RL: a separate RL algorithm with the capability of one-shot learning emerges through training with a ï¬ xed and more incremental RL algorithm (i.e. policy gradient). Meta-RL can be viewed as allowing the agent to infer the optimal value function following initial exploration (see Figure 7d) â with the additional LSTM providing information about the currently relevant goal location to the LSTM that outputs the policy over the extended timeframe of the episode. Taken together, meta-RL allows a base model-free RL algorithm to solve a challenging RL problem that might otherwise require fundamentally different approaches (e.g. based on successor representations or fully model-based RL). # 4 RELATED WORK We have already touched on the relationship between deep meta-RL and pioneering work by Hochre- iter et al. (2001) using recurrent networks to perform meta-learning in the setting of full supervision | 1611.05763#37 | 1611.05763#39 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#39 | Learning to reinforcement learn | 13 (see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). That approach was recently extended in Santoro et al. (2016), which demonstrated the utility of leveraging an external memory structure. The idea of crossing meta-learning with reinforcement learning has been previ- ously discussed by Schmidhuber et al. (1996). That work, which appears to have introduced the term â meta-RL,â differs from ours in that it did not involve a neural network implementation. More recently, however, there has been a surge of interest in using neural networks to learn optimization procedures, using a range of innovative meta-learning techniques (Andrychowicz et al., 2016; Chen et al., 2016; Li and Malik, 2016; Zoph and Le, 2016). Recent work by Chen et al. (2016) is particularly close in spirit to the work we have presented here, and can be viewed as treating the case of â | 1611.05763#38 | 1611.05763#40 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#40 | Learning to reinforcement learn | inï¬ nite banditsâ using a meta-learning strategy broadly analogous to the one we have pursued. The present research also bears a close relationship with a different body of recent work that has not been framed in terms of meta-learning. A number of studies have used deep RL to train recurrent neural networks on navigation tasks, where the structure of the task (e.g., goal location or maze conï¬ guration) varies across episodes (Jaderberg et al., 2016; Mirowski et al., 2016). The ï¬ nal experiment that we presented above, drawn from (Mirowski et al., 2016), is one example. To the extent that such experiments involve the key ingredients of deep meta-RL â a neural network with memory, trained through RL on a series of interrelated tasks â they are almost certain to involve the kind of meta-learning we have described in the present work. This related work provides an indication that meta-RL can be fruitfully applied to larger scale problems than the ones we have studied in our own experiments. Importantly, it indicates that a key ingredient in scaling the approach may be to incorporate memory mechanisms beyond those inherent in unstructured recurrent neural networks (see Graves et al., 2016; Mirowski et al., 2016; Santoro et al., 2016; Weston et al., 2014). Our work, for its part, suggests that there is untapped potential in deep recurrent RL agents to meta-learn quite abstract aspects of task structure, and to discover strategies that exploit such structure toward rapid, ï¬ exible adaptation. During completion of the present research, closely related work was reported by Duan et al. (2016). Like us, Duan and colleagues use deep RL to train a recurrent network on a series of interrelated tasks, with the result that the network dynamics learn a second RL procedure which operates on a faster time-scale than the original algorithm. They compare the performance of these learned procedures against conventional RL algorithms in a number of domains, including bandits and navigation. An important difference between this parallel work and our own is the formerâ | 1611.05763#39 | 1611.05763#41 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#41 | Learning to reinforcement learn | s primary focus on relatively unstructured task distributions (e.g., uniformly distributed bandit problems and random MDPs); our main interest, in contrast, has been in structured task distributions (e.g., dependent bandits and the task introduced by Harlow, 1949), because it is in this setting where the system can learn a biased â and therefore efï¬ cient â RL procedure that exploits regular task structure. The two perspectives are, in this regard, nicely complementary. # 5 CONCLUSION A current challenge in artiï¬ cial intelligence is to design agents that can adapt rapidly to new tasks by leveraging knowledge acquired through previous experience with related activities. In the present work we have reported initial explorations of what we believe is one promising avenue toward this goal. Deep meta-RL involves a combination of three ingredients: (1) Use of a deep RL algorithm to train a recurrent neural network, (2) a training set that includes a series of interrelated tasks, (3) network input that includes the action selected and reward received in the previous time interval. The key result, which emerges naturally from the setup rather than being specially engineered, is that the recurrent network dynamics learn to implement a second RL procedure, independent from and potentially very different from the algorithm used to train the network weights. Critically, this learned RL algorithm is tuned to the shared structure of the training tasks. In this sense, the learned algorithm builds in domain-appropriate biases, which can allow it to operate with greater efï¬ ciency than a general-purpose algorithm. This bias effect was particularly evident in the results of our experiments involving dependent bandits (sections 3.1.2 and 3.1.3), where the system learned to take advantage of the taskâ s covariance structure; and in our study of Harlowâ s animal learning task (section 3.2.2), where the recurrent network learned to exploit the taskâ s structure in order to display one-shot learning with complex novel stimuli. | 1611.05763#40 | 1611.05763#42 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#42 | Learning to reinforcement learn | 14 One of our experiments (section 3.2.1) illustrated the point that a system trained using a model-free RL algorithm can develop behavior that emulates model-based control. A few further comments on this result are warranted. As noted in our presentation of the simulation results, the pattern of choice behavior displayed by the network has been considered in the cognitive and neuroscience literatures as reï¬ ecting model-based control or tree search. However, as has been remarked in very recent work, the same pattern can arise from a model-free system with an appropriate state representation (Akam et al., 2015). Indeed, we suspect this may be how our network in fact operates. However, other ï¬ ndings suggest that a more explicitly model-based control mechanism can emerge when a similar system is trained on a more diverse set of tasks. In particular, Ilin et al. (2007) showed that recurrent networks trained on random mazes can approximate dynamic programming procedures (see also Silver et al., 2017; Tamar et al., 2016). At the same time, as we have stressed, we consider it an important aspect of deep meta-RL that it yields a learned RL algorithm that capitalizes on invariances in task structure. As a result, when faced with widely varying but still structured environments, deep meta-RL seems likely to generate RL procedures that occupy a grey area between model-free and model-based RL. The two-step decision problem studied in Section 3.2.1 was derived from neuroscience, and we believe deep meta-RL may have important implications in that arena (Wang et al., 2017). The notion of meta-RL has been discussed previously in neuroscience but only in a narrow sense, according to which meta-learning adjusts scalar hyperparameters such as the learning rate or softmax inverse temperature (Khamassi et al., 2011; 2013; Kobayashi et al., 2009; Lee and Wang, 2009; Schweighofer and Doya, 2003; Soltani et al., 2006). In recent work (Wang et al., 2017) we have shown that deep meta-RL can account for a wider range of experimental observations, providing an integrative framework for understanding the respective roles of dopamine and the prefrontal cortex in biological reinforcement learning. | 1611.05763#41 | 1611.05763#43 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#43 | Learning to reinforcement learn | ACKNOWLEDGEMENTS We would like the thank the following colleagues for useful discussion and feedback: Nando de Freitas, David Silver, Koray Kavukcuoglu, Daan Wierstra, Demis Hassabis, Matt Hoffman, Piotr Mirowski, Andrea Banino, Sam Ritter, Neil Rabinowitz, Peter Dayan, Peter Battaglia, Alex Lerchner, Tim Lillicrap and Greg Wayne. # REFERENCES Thomas Akam, Rui Costa, and Peter Dayan. Simple plans or sophisticated habits? state, transition and learning interactions in the two-step task. PLoS Comput Biol, 11(12):e1004648, 2015. | 1611.05763#42 | 1611.05763#44 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#44 | Learning to reinforcement learn | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235â 256, 2002. | 1611.05763#43 | 1611.05763#45 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#45 | Learning to reinforcement learn | Timothy EJ Behrens, Mark W Woolrich, Mark E Walton, and Matthew FS Rushworth. Learning the value of information in an uncertain world. Nature neuroscience, 10(9):1214â 1221, 2007. Ethan S Bromberg-Martin and Okihide Hikosaka. Midbrain dopamine neurons signal preference for advance information about upcoming rewards. Neuron, 63(1):119â 126, 2009. Yutian Chen, Matthew W Hoffman, Sergio Gomez, Misha Denil, Timothy P Lillicrap, and Nando de Freitas. Learning to learn for global optimization of black box functions. arXiv preprint arXiv:1611.03824, 2016. NE Cotter and PR Conwell. Fixed-weight networks can learn. In 1990 IJCNN International Joint Conference on Neural Networks, pages 553â 559, 1990. Nathaniel D Daw, Yael Niv, and Peter Dayan. | 1611.05763#44 | 1611.05763#46 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#46 | Learning to reinforcement learn | Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature neuroscience, 8(12):1704â 1711, 2005. Nathaniel D Daw, Samuel J Gershman, Ben Seymour, Peter Dayan, and Raymond J Dolan. Model-based inï¬ uences on humansâ choices and striatal prediction errors. Neuron, 69(6):1204â 1215, 2011. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248â 255. IEEE, 2009. Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. | 1611.05763#45 | 1611.05763#47 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#47 | Learning to reinforcement learn | Rl2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016. URL http://arxiv. 15 org/abs/1611.02779. Marcos Economides, Zeb Kurth-Nelson, Annika Lübbert, Marc Guitart-Masip, and Raymond Dolan. Model- based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11(9):e1004463, 2015. | 1611.05763#46 | 1611.05763#48 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#48 | Learning to reinforcement learn | John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological), pages 148â 177, 1979. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi´nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. | 1611.05763#47 | 1611.05763#49 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#49 | Learning to reinforcement learn | Hybrid computing using a neural network with dynamic external memory. Nature, 2016. Harry F Harlow. The formation of learning sets. Psychological review, 56(1):51, 1949. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â 1780, 1997. Sepp Hochreiter, A Steven Younger, and Peter R Conwell. | 1611.05763#48 | 1611.05763#50 | 1611.05763 | [
"1611.01578"
]
|
1611.05763#50 | Learning to reinforcement learn | Learning to learn using gradient descent. International Conference on Artiï¬ cial Neural Networks, pages 87â 94. Springer, 2001. In Roman Ilin, Robert Kozma, and Paul J Werbos. Efï¬ cient learning in cellular simultaneous recurrent neural networks-the case of maze navigation problem. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 324â 329. IEEE, 2007. Max Jaderberg, Volodymir Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. URL http://arxiv.org/abs/1611.05397. | 1611.05763#49 | 1611.05763#51 | 1611.05763 | [
"1611.01578"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.