Pruning Convolutional Neural Networks for Resource Efficient Inference

Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz

Published as a conference paper at ICLR 2017 (arXiv:1611.06440).

Abstract: We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.

AlexNet / Flowers-102

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.17 | 0.65 | 0.67 | 0.54 | 0.64 | 0.77 |
| All layers | 0.28 | 0.51 | 0.53 | 0.41 | 0.68 | 0.37 |
| All layers (w/ ℓ2-norm) | 0.13 | 0.63 | 0.61 | 0.60 | – | 0.75 |

VGG-16 / Birds-200

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor | Mutual Info. |
|---|---|---|---|---|---|---|---|
| Per layer | 0.27 | 0.56 | 0.57 | 0.35 | 0.59 | 0.73 | 0.28 |
| All layers | 0.34 | 0.35 | 0.30 | 0.43 | 0.65 | 0.14 | 0.35 |
| All layers (w/ ℓ2-norm) | 0.33 | 0.64 | 0.66 | 0.51 | – | 0.73 | 0.47 |

AlexNet / Birds-200

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.36 | 0.57 | 0.65 | 0.42 | 0.54 | 0.81 |
| All layers | 0.32 | 0.37 | 0.51 | 0.28 | 0.61 | 0.37 |
| All layers (w/ ℓ2-norm) | 0.23 | 0.54 | 0.57 | 0.49 | – | 0.78 |

VGG-16 / Flowers-102

| | Weight | Activation (Mean) | Activation (S.d.) | Activation (APoZ) | OBD | Taylor |
|---|---|---|---|---|---|---|
| Per layer | 0.19 | 0.51 | 0.47 | 0.36 | 0.21 | 0.60 |
| All layers | 0.35 | 0.53 | 0.45 | 0.61 | 0.28 | 0.02 |
| All layers (w/ ℓ2-norm) | 0.28 | 0.66 | 0.65 | 0.61 | – | |
Table 1: Spearman's rank correlation of criteria vs. oracle for convolutional feature maps of VGG-16 and AlexNet fine-tuned on Birds-200 and Flowers-102 datasets, and AlexNet trained on ImageNet.
Figure 4: Pruning of feature maps in VGG-16 fine-tuned on the Birds-200 dataset.
even if their relationship is not linear. Given the difference between oracle⁴ and criterion ranks, d_i = rank(Θ_oracle(i)) − rank(Θ_criterion(i)) for each parameter i, the rank correlation is computed:
S = 1 − (6 Σ_i d_i²) / (N(N² − 1)),    (10)
where N is the number of parameters (and the highest rank). This correlation coefficient takes values in [−1, 1], where −1 implies full negative correlation, 0 no correlation, and 1 full positive correlation.
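As a minimal sketch (not the authors' code), the rank correlation between a criterion and the oracle can be computed directly from Eq. (10); `criterion_scores` and `oracle_scores` are assumed to be arrays of per-feature-map importance values with no ties.

```python
# Minimal sketch (not the authors' code): Spearman's rank correlation between a
# pruning criterion and the oracle, following Eq. (10).
import numpy as np

def spearman_correlation(criterion_scores, oracle_scores):
    criterion_scores = np.asarray(criterion_scores)
    oracle_scores = np.asarray(oracle_scores)
    n = len(criterion_scores)
    # Rank each feature map by its score (0 = least important).
    criterion_rank = np.argsort(np.argsort(criterion_scores))
    oracle_rank = np.argsort(np.argsort(oracle_scores))
    d = oracle_rank - criterion_rank
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Example: a random criterion should give correlation near 0, a perfect one gives 1.
rng = np.random.default_rng(0)
oracle = rng.random(512)
print(spearman_correlation(rng.random(512), oracle))   # ~0.0
print(spearman_correlation(oracle, oracle))            # 1.0
```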
We show Spearman's correlation in Table 1 to compare the oracle-abs ranking to rankings by different criteria on a set of networks/datasets, some of which are introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe 0.0 correlation across all layers. "Per layer" analysis shows ranking within each convolutional layer, while "All layers" describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise ℓ2-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with ℓ2 normalization). OBD shows the best correlation across layers when no normalization is used; it also shows the best correlation on the ImageNet dataset. (See the Appendix for further analysis.)
# 3.3 PRUNING FINE-TUNED IMAGENET NETWORKS
We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer learning.
⁴We use oracle-abs because of its better performance in the previous experiment.
Figure 5: Pruning of feature maps in AlexNet fine-tuned on Flowers-102.
Branson et al. (2014) show that training a CNN from scratch on the Birds-200 dataset achieves a test accuracy of only 10.9%. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted "from scratch".
Fig. 4 shows pruning of VGG-16 after fine-tuning on the Birds-200 dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform 30 minibatch SGD updates with batch size 32, momentum 0.9, learning rate 10⁻⁴, and weight decay 10⁻⁴. The figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD shows slightly worse performance of pruning in terms of parameters, and significantly worse performance in terms of FLOPs.
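For concreteness, one pruning iteration under these settings might look as follows; this is a minimal sketch rather than the authors' implementation, and `remove_feature_map` is a hypothetical helper that deletes the selected map (and its dependent weights) from the model.

```python
# Minimal sketch (assumed helper names, not the authors' code) of one greedy
# prune / fine-tune iteration: drop the least important feature map, then
# briefly fine-tune with SGD before the next iteration.
import torch

def prune_iteration(model, criterion_scores, train_loader, n_updates=30):
    # criterion_scores: dict mapping (layer, feature_map_index) -> importance score
    least_important = min(criterion_scores, key=criterion_scores.get)
    remove_feature_map(model, *least_important)   # hypothetical helper
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4,
                                momentum=0.9, weight_decay=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for step, (images, labels) in enumerate(train_loader):
        if step >= n_updates:                     # e.g. 30 minibatch updates
            break
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```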
In Fig. 5, we show pruning of the CaffeNet implementation of AlexNet (Krizhevsky et al., 2012) after adapting it to the Oxford Flowers 102 dataset (Nilsback & Zisserman, 2008), with 2040 training and 6129 test images from 102 species of flowers. Criteria correlation with oracle-abs is summarized in Table 1. We initially fine-tune the network for 20 epochs using a learning rate of 0.001, achieving a final test accuracy of 80.1%. Pruning then proceeds as previously described for Birds-200, except with only 10 mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in both the number of parameters and GFLOPs.
We observed that the Taylor criterion shows the best performance, closely followed by OBD with a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because it requires computing the diagonal of the Hessian, and it is 50% to 300% slower than the Taylor criterion, which relies only on first-order gradients.
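As a rough illustration of why the Taylor criterion is cheap: per feature map it only needs the absolute mean of activation times gradient, both of which are already available during fine-tuning. The hook-based mechanics below are our own sketch, not the authors' code; the exact averaging and normalization details follow the paper.

```python
# Minimal sketch (our own): accumulate the Taylor criterion
# |mean over batch and spatial positions of activation * gradient| per feature map,
# gathered with standard PyTorch hooks during fine-tuning.
import torch

taylor_scores = {}

def register_taylor_hook(name, conv_layer):
    def forward_hook(module, inputs, output):
        # Attach a gradient hook to the layer's output feature maps.
        def grad_hook(grad):
            # output, grad: (batch, channels, H, W)
            contribution = (output.detach() * grad).mean(dim=(0, 2, 3)).abs()
            prev = taylor_scores.get(name, torch.zeros_like(contribution))
            taylor_scores[name] = prev + contribution
        output.register_hook(grad_hook)
    conv_layer.register_forward_hook(forward_hook)
```

Scores accumulated this way over a few minibatches can then be ℓ2-normalized per layer (see the Appendix) before ranking feature maps across the whole network.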
Fig. 6 shows pruning with the Taylor technique and a varying number of fine-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure.
During pruning we observe a small drop in accuracy. One of the reasons is the fine-tuning between pruning iterations. The accuracy of the initial network can be improved with longer fine-tuning and a search for better optimization parameters; for example, the accuracy of the unpruned VGG-16 network on Birds-200 goes up to 75% after an extra 128k updates, and AlexNet on Flowers-102 goes up to 82.9% after 130k updates. It should be noted that with further fine-tuning of the pruned networks we can achieve higher accuracy as well, so the one-to-one comparison of accuracies is rough.
3.4 PRUNING A RECURRENT 3D-CNN NETWORK FOR HAND GESTURE RECOGNITION
Molchanov et al. (2016) learn to recognize 25 dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset (Karpathy et al., 2014) and fine-tuning on a gesture dataset. The full network achieves an accuracy of 80.7% when trained on the depth modality, but a single inference requires an estimated 37.8 GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate 0.0003, momentum 0.9, and FLOPs regularization 10⁻³, we reduce inference to 3.0 GFLOPs, as shown in Fig. 7.
Figure 6: Varying the number of minibatch updates between pruning iterations with AlexNet/Flowers-102 and the Taylor criterion.
Figure 7: Pruning of a recurrent 3D-CNN for dynamic hand gesture recognition (Molchanov et al., 2016).
Figure 8: Pruning of AlexNet on ImageNet with varying number of updates between pruning iterations.
While pruning increases classification error by nearly 6%, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a 12.6x reduction in GFLOPs and only a 2.5% loss in accuracy.
# 3.5 PRUNING NETWORKS FOR IMAGENET
We also test our pruning scheme on the large-scale ImageNet classification task. In the first experiment, we begin with a trained CaffeNet implementation of AlexNet with 79.2% top-5 validation accuracy. Between pruning iterations, we fine-tune with learning rate 10⁻⁴, momentum 0.9, weight decay 10⁻⁴, batch size 32, and drop-out 50%. Using a subset of 5000 training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table 1. Pruning traces are illustrated in Fig. 8.
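For intuition, the oracle-abs score mentioned above can be sketched as follows: zero out each feature map in turn and record the absolute change in the cost on the held-out subset. The helpers `evaluate_cost` and `zero_feature_map` are hypothetical; the subset size and cost function follow the text.

```python
# Minimal sketch (our own, with hypothetical helpers): oracle-abs importance of
# each feature map, i.e. |change in cost C| when that map is zeroed out.
def oracle_abs(model, data_subset, loss_fn, feature_maps):
    base_cost = evaluate_cost(model, data_subset, loss_fn)       # hypothetical helper
    scores = {}
    for layer, index in feature_maps:
        with zero_feature_map(model, layer, index):               # hypothetical context manager
            cost = evaluate_cost(model, data_subset, loss_fn)
        scores[(layer, index)] = abs(cost - base_cost)
    return scores
```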
We observe: 1) Taylor performs better than random or minimum-weight pruning when 100 updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only 0%–4%, but the difference is higher, 1%–10%, when plotted against the number of feature maps pruned. 2) Increasing the number of updates from 100 to 1000 improves the performance of pruning significantly for both the Taylor criterion and random pruning.
Figure 9: Pruning of the VGG-16 network on ImageNet, with additional fine-tuning at 11.5 and 8 GFLOPs.
AlexNet / Flowers-102, 1.46 GFLOPs

| Hardware | Batch | Unpruned (80.1%), time ms | 41% feature maps, 0.4 GFLOPs (79.8%, −0.3%), time ms (speed up) | 19.5% feature maps, 0.2 GFLOPs (74.1%, −6.0%), time ms (speed up) |
|---|---|---|---|---|
| CPU: Intel Core i7-5930K | 16 | 226.4 | 121.4 (1.9x) | 87.0 (2.6x) |
| GPU: GeForce GTX TITAN X (Pascal) | 16 | 4.8 | 2.4 (2.0x) | 1.9 (2.5x) |
| GPU: GeForce GTX TITAN X (Pascal) | 512 | 88.3 | 36.6 (2.4x) | 27.4 (3.2x) |
| GPU: NVIDIA Jetson TX1 | 32 | 169.2 | 73.6 (2.3x) | 58.6 (2.9x) |

VGG-16 / ImageNet, 30.96 GFLOPs

| Hardware | Batch | Unpruned (89.3%), time ms | 66% feature maps, 11.5 GFLOPs, time ms |
|---|---|---|---|
| CPU: Intel Core i7-5930K | 16 | 2564.7 | 1483.3 |
| GPU: GeForce GTX TITAN X (Pascal) | 16 | 68.3 | |
| GPU: NVIDIA Jetson TX1 | 4 | 456.6 | |
Table 2: Actual speed up of networks pruned by the Taylor criterion for various hardware setups. All measurements were performed with PyTorch with cuDNN v5.1.0, except R3DCNN, which was implemented in C++ with cuDNN v4.0.4. Results for the ImageNet dataset are reported as top-5 accuracy on the validation set. Results on AlexNet / Flowers-102 are reported for pruning with 1000 updates between iterations and no fine-tuning after pruning.
For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except enabling FLOPs regularization. We stop pruning at two points, 11.5 and 8.0 GFLOPs, and fine-tune both models for an additional five epochs with learning rate 10⁻⁴. Fine-tuning after pruning significantly improves results: the network pruned to 11.5 GFLOPs improves from 83% to 87% top-5 validation accuracy, and the network pruned to 8.0 GFLOPs improves from 77.8% to 84.5%.
3.6 SPEED UP MEASUREMENTS
During pruning we measured the reduction in computations by FLOPs, which is a common practice (Han et al., 2015; Lavin, 2015a;b). Improvements in FLOPs result in monotonically decreasing inference time of the networks, because entire feature maps are removed from each layer. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore we measure the improvement in inference time for selected networks to see the real speed up compared to unpruned networks in Table 2. We observe significant speed ups with the proposed pruning scheme.
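A simple way to obtain such wall-clock numbers on a GPU (our own sketch, not the authors' benchmarking code) is to time a batch of forward passes with explicit synchronization:

```python
# Minimal sketch (not the authors' benchmarking code): wall-clock inference time
# of a pruned or unpruned network on GPU, with warm-up and synchronization.
import time
import torch

def measure_inference_ms(model, batch, n_warmup=10, n_runs=50):
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):            # warm up caches / cuDNN autotuning
            model(batch)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(batch)
        torch.cuda.synchronize()              # wait for all kernels to finish
    return (time.perf_counter() - start) / n_runs * 1000.0   # ms per batch
```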
# 4 CONCLUSIONS
We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters (feature maps in this case) according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling.
# REFERENCES
Jose M Alvarez and Mathieu Salzmann. Learning the Number of Neurons in Deep Networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2262–2270. Curran Associates, Inc., 2016.

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. arXiv preprint arXiv:1512.08571, 2015. URL http://arxiv.org/abs/1512.08571.

Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214–1229, 2007.

Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona. Bird species categorization using pose normalized deep convolutional nets. arXiv preprint arXiv:1406.2952, 2014.

Yann Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. In Advances in Neural Information Processing Systems, pp. 1504–1512, 2015.
Mikhail Figurnov, Aizhan Ibraimova, Dmitry P Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems, pp. 947–955, 2016.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015. URL http://arxiv.org/abs/1502.02551.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA '16, pp. 243–254, Piscataway, NJ, USA, 2016. IEEE Press.
Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems (NIPS), pp. 164–171, 1993.

Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.

Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Andrew Lavin. maxDNN: An Efficient Convolution Kernel for Deep Learning with Maxwell GPUs. CoRR, abs/1501.06633, 2015a. URL http://arxiv.org/abs/1501.06633.

Andrew Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015b.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2564, 2016.

Yann LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems (NIPS), 1990.

Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus Robert Müller. Efficient BackProp, pp. 9–50. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.

James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 735–742, 2010.
James Martens, Ilya Sutskever, and Kevin Swersky. Estimating the Hessian by back-propagating curvature. arXiv preprint arXiv:1206.6464, 2012.
Pavlo Molchanov, Xiaodong Yang, Shalini Gupta, Kihwan Kim, Stephen Tyree, and Jan Kautz. Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
Barak A. Pearlmutter. Fast Exact Multiplication by the Hessian. Neural Computation, 6:147–160, 1994.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279.

Russell Reed. Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4(5):740–747, 1993.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Suraj Srinivas and R. Venkatesh Babu. Data-free parameter pruning for deep neural networks. In Mark W. Jones, Xianghua Xie, and Gary K. L. Tam (eds.), Proceedings of the British Machine Vision Conference (BMVC), pp. 31.1–31.12. BMVA Press, September 2015.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset, 2011.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
1611.06440 | 51 | Hao Zhou, Jose M. Alvarez, and Fatih Porikli. Less is more: Towards compact cnns. In European Conference on Computer Vision, pp. 662â677, Amsterdam, the Netherlands, October 2016.
A APPENDIX
A.1 FLOPS COMPUTATION
To compute the number of floating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional kernels we have:
FLOPs = 2HW (C_in K^2 + 1) C_out, (11)
where H, W and C_in are the height, width and number of channels of the input feature map, K is the kernel width (assumed to be symmetric), and C_out is the number of output channels.
For fully connected layers we compute FLOPs as:
FLOPs = (2I − 1) O, (12)
where I is the input dimensionality and O is the output dimensionality.
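As a concrete illustration of Eqs. (11)-(12), a small helper that counts FLOPs for a convolutional and a fully connected layer; the layer sizes in the example are illustrative, not taken from the paper.

```python
def conv_flops(h, w, c_in, c_out, k):
    """FLOPs for one conv layer, Eq. (11): 2*H*W*(C_in*K^2 + 1)*C_out."""
    return 2 * h * w * (c_in * k * k + 1) * c_out

def fc_flops(i, o):
    """FLOPs for one fully connected layer, Eq. (12): (2*I - 1)*O."""
    return (2 * i - 1) * o

# Hypothetical example: a VGG-style 3x3 conv on a 224x224 RGB input, and a 4096->1000 FC layer.
print(conv_flops(h=224, w=224, c_in=3, c_out=64, k=3))  # ~1.8e8 FLOPs
print(fc_flops(i=4096, o=1000))                          # ~8.2e6 FLOPs
```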
We apply FLOPs regularization during pruning to prune neurons with higher FLOPs ï¬rst. FLOPs per convolutional neuron in every layer: | 1611.06440#51 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 53 | # A.2 NORMALIZATION ACROSS LAYERS
Scaling a criterion across layers is very important for pruning. If the criterion is not properly scaled, then a hand-tuned multiplier would need to be selected for each layer. Statistics of feature map ranking by different criteria are shown in Fig. 10. Without normalization (Fig. 10a-10c), the weight magnitude criterion tends to rank feature maps from the first layers as more important than those from the last layers; the activation criterion ranks middle layers as more important; and Taylor ranks the first layers higher. After ℓ2 normalization (Fig. 10d-10f), all criteria have a shape more similar to the oracle, where each layer has some feature maps which are highly important and others which are unimportant.
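A minimal sketch of this layer-wise ℓ2 re-scaling, assuming the per-feature-map saliency scores are stored as one array per layer:

```python
import numpy as np

def l2_normalize_per_layer(scores_per_layer):
    """Rescale each layer's saliency scores by the l2 norm of that layer's scores,
    so that rankings become comparable across layers."""
    normalized = []
    for scores in scores_per_layer:
        norm = np.sqrt(np.sum(scores ** 2)) + 1e-12  # avoid division by zero
        normalized.append(scores / norm)
    return normalized

# Hypothetical saliency scores for three layers with very different scales.
layers = [np.array([0.5, 3.0, 0.1]), np.array([10.0, 40.0]), np.array([1e-3, 5e-3, 2e-3])]
print(l2_normalize_per_layer(layers))
```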
(a) Weight (b) Activation (mean) (c) Taylor errresrs) ee a en 7 _ = median (d) Weight + ¢2 (e) Activation (mean) + £2 (f) Taylor + £2 | 1611.06440#53 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 55 | MI Weight Activation OBD Taylor Mean S.d. APoZ Per layer Layer 1 0.41 0.40 0.65 0.78 0.36 0.54 0.95 Layer 2 0.23 0.57 0.56 0.59 0.33 0.78 0.90 Layer 3 0.14 0.55 0.48 0.45 0.51 0.66 0.74 Layer 4 0.26 0.23 0.58 0.42 0.10 0.36 0.80 Layer 5 0.17 0.28 0.49 0.52 0.15 0.54 0.69 Layer 6 0.21 0.18 0.41 0.48 0.16 0.49 0.63 Layer 7 0.12 0.19 0.54 0.49 0.38 0.55 0.71 Layer 8 0.18 0.23 0.43 0.42 0.30 0.50 0.54 Layer 9 0.21 0.18 0.50 0.55 0.35 0.53 0.61 Layer 10 0.26 0.15 0.59 0.60 0.45 0.61 0.66 Layer 11 0.41 0.12 0.61 0.65 0.45 0.64 0.72 Layer 12 0.47 0.15 0.60 0.66 0.39 0.66 0.72 Layer 13 0.61 | 1611.06440#55 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 57 | Table 3: Spearmanâs rank correlation of criteria vs oracle-abs in VGG-16 ï¬ne-tuned on Birds 200.
A.3 ORACLE COMPUTATION FOR VGG-16 ON BIRDS-200
We compute the change in the loss caused by removing individual feature maps from the VGG-16 network, after ï¬ne-tuning on the Birds-200 dataset. Results are illustrated in Fig. 11a-11b for each feature map in layers 1 and 13, respectively. To compute the oracle estimate for a feature map, we remove the feature map and compute the network prediction for each image in the training set using the central crop with no data augmentation or dropout. We draw the following conclusions:
⢠The contribution of feature maps range from positive (above the red line) to slightly negative (below the red line), implying the existence of some feature maps which decrease the training cost when removed.
⢠There are many feature maps with little contribution to the network output, indicated by almost zero change in loss when removed.
⢠Both layers contain a small number of feature maps which induce a signiï¬cant increase in the loss when removed.
(a) Layer 1 (b) Layer 13
oor 0.008 0.006 0.004 change in loss 0.002, 0.000 00025 30 0 35 20 i0 0 Feature map Index | 1611.06440#57 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 58 | oor 0.008 0.006 0.004 change in loss 0.002, 0.000 00025 30 0 35 20 i0 0 Feature map Index
Figure 11: Change in training loss as a function of the removal of a single feature map from the VGG-16 network after fine-tuning on Birds-200. Results are plotted for two convolutional layers w.r.t. the index of the removed feature map. The loss with all feature maps, 0.00461, is indicated with a red horizontal line.
100% <â & regularization, > = 0.01 80% " larization, 7 = 0.04 " lor, 50 updates ~~ Taylor, 100 updates 60% â Taylor, 200 updates Parameters 40% 20% 0%, 0 50 ~ 100 ~+150°~S*S*«S 002; Mini-batch updates, x1000 | 1611.06440#58 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 59 | 100% <â & regularization, > = 0.01 80% " larization, 7 = 0.04 " lor, 50 updates ~~ Taylor, 100 updates 60% â Taylor, 200 updates Parameters 40% Accuracy, test set 20% 0%, 0 50 ~ 100 ~+150°~S*S*«S 002; 380% 80% 60% 40% 20% 0% Mini-batch updates, x1000 Parameters
Figure 12: Comparison of our iterative pruning with pruning by regularization
Table 3 contains a layer-by-layer listing of Spearman's rank correlation of several criteria with the ranking of oracle-abs. In this more detailed comparison, we see that the Taylor criterion shows higher correlation for all individual layers. For several methods, including Taylor, the worst correlations are observed for the middle of the network, layers 5-10. We also evaluate several techniques for normalization of the raw criteria values for comparison across layers. The table shows that the best performance is obtained by ℓ2 normalization, hence we select it for our method.
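The rank agreement reported in Table 3 can be computed with Spearman's correlation between a criterion's per-feature-map scores and the oracle-abs scores; a short sketch with synthetic scores:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-feature-map scores for one layer.
oracle_abs = np.abs(np.random.randn(256))   # oracle-abs ranking target
taylor     = np.abs(np.random.randn(256))   # criterion being evaluated

rho, _ = spearmanr(taylor, oracle_abs)
print(f"Spearman rank correlation vs. oracle-abs: {rho:.2f}")
```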
# A.4 COMPARISON WITH WEIGHT REGULARIZATION | 1611.06440#59 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 60 | # A.4 COMPARISON WITH WEIGHT REGULARIZATION
Han et al. (2015) find that fine-tuning with high ℓ1 or ℓ2 regularization causes unimportant connections to be suppressed. Connections with energy lower than some threshold can be removed on the assumption that they do not contribute much to subsequent layers. The same work also finds that thresholds must be set separately for each layer depending on its sensitivity to pruning. The procedure to evaluate sensitivity is time-consuming as it requires pruning layers independently during evaluation.
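This thresholding idea, extended to whole feature maps as discussed in the next paragraph, can be sketched as follows: compute the ℓ2 norm of each output filter's kernels and flag the filters that fall below a threshold. The array shape and threshold here are illustrative, not the paper's settings.

```python
import numpy as np

def maps_below_threshold(conv_weights, eps=1e-3):
    """conv_weights: array of shape (C_out, C_in, K, K).
    Returns indices of feature maps whose kernel l2 norm falls below eps, and the norms."""
    norms = np.sqrt((conv_weights ** 2).sum(axis=(1, 2, 3)))
    return np.where(norms < eps)[0], norms

w = np.random.randn(64, 32, 3, 3) * 1e-2
w[[3, 17]] *= 1e-4                       # two filters driven toward zero by regularization
pruned, norms = maps_below_threshold(w, eps=1e-3)
print(pruned)                            # -> [ 3 17]
```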
The idea of pruning with high regularization can be extended to removing the kernels for an entire feature map if the £2 norm of those kernels is below a predefined threshold. We compare our approach with this regularization-based pruning for the task of pruning the last convolutional layer of VGG-16 fine-tuned for Birds-200. By considering only a single layer, we avoid the need to compute layerwise sensitivity. Parameters for optimization during fine-tuning are the same as other experiments with the Birds-200 dataset. For the regularization technique, the pruning threshold is set to ¢ = 10~° while we vary the regularization coefficient 7 of the £2 norm on each feature map kernel} We prune only kernel weights, while keeping the bias to maintain the same expected output. | 1611.06440#60 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 61 | A comparison between pruning based on regularization and our greedy scheme is illustrated in Fig. 12. We observe that our approach has higher test accuracy for the same number of remaining unpruned feature maps, when pruning 85% or more of the feature maps. We observe that with high regularization all weights tend to zero, not only unimportant weights as Han et al. (2015) observe in the case of ImageNet networks. The intuition here is that with regularization we push all weights down and potentially can affect important connections for transfer learning, whereas in our iterative procedure we only remove unimportant parameters leaving others untouched.
A.5 COMBINATION OF CRITERIA
One of the possibilities to improve saliency estimation is to combine several criteria together. One straightforward combination is Taylor and the mean activation of the neuron. We compute the joint criterion as Θ_joint(z_l^(k)) = (1 − λ)Θ_Taylor(z_l^(k)) + λΘ_Activation(z_l^(k)) and perform a grid search over the parameter λ in Fig. 13. The highest correlation value for each dataset is marked with a vertical bar, together with λ and the gain. We observe that the gain from linearly combining criteria is negligibly small (see Δ's in the figure).
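A small sketch of the grid search over λ for the combined criterion, scored by Spearman correlation against the oracle; the score arrays below are synthetic stand-ins for the normalized per-feature-map criteria:

```python
import numpy as np
from scipy.stats import spearmanr

oracle     = np.abs(np.random.randn(512))
taylor     = oracle + 0.3 * np.random.randn(512)   # synthetic criterion, strongly correlated
activation = oracle + 0.8 * np.random.randn(512)   # synthetic criterion, weakly correlated

best = max(
    ((lam, spearmanr((1 - lam) * taylor + lam * activation, oracle)[0])
     for lam in np.linspace(0.0, 1.0, 21)),
    key=lambda t: t[1],
)
print(f"best lambda = {best[0]:.2f}, correlation = {best[1]:.3f}")
```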
5 In our implementation, the regularization coefficient is multiplied by the learning rate, which is equal to 10^-4.
Published as a conference paper at ICLR 2017 | 1611.06440#61 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 62 | 5In our implementation, the regularization coefï¬cient is multiplied by the learning rate equal to 10â4.
[Figure 13 plot: Spearman correlation (higher is better) as a function of λ for the combined criterion (1 − λ)·Taylor + λ·Activation, with curves for VGG-16/Birds-200, AlexNet/Flowers-102, VGG-16/Flowers-102, AlexNet/ImageNet and AlexNet/Birds-200.]
Figure 13: Spearman rank correlation for linear combination of criteria. The per layer metric is used. Each Δ indicates the gain in correlation for one experiment.
A.6 OPTIMAL BRAIN DAMAGE IMPLEMENTATION | 1611.06440#62 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 63 | A.6 OPTIMAL BRAIN DAMAGE IMPLEMENTATION
OBD computes the saliency of a parameter as the product of the squared magnitude of the parameter and the corresponding element on the diagonal of the Hessian. For many deep learning frameworks, an efficient implementation of the diagonal evaluation is not straightforward and approximation techniques must be applied. Our implementation of the Hessian diagonal computation was inspired by the work of Dauphin et al. (2015), where the technique proposed by Bekas et al. (2007) was used to evaluate SGD preconditioned with the Jacobi preconditioner. It was shown that the diagonal of the Hessian can be approximated as:
diag(H) = E[v © Hv] = E[v© V(VC -v)], (13) | 1611.06440#63 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 64 | diag(H) = E[v ⊙ Hv] = E[v ⊙ ∇(∇C · v)], (13)
where ⊙ is the element-wise product, v are random vectors with entries ±1, and ∇ is the gradient operator. To compute saliency with OBD, we randomly draw v and compute the diagonal over 10 iterations for a single minibatch, for 1000 minibatches. We found that this number of minibatches is required to compute a close approximation of the Hessian's diagonal (which we verified). Computing saliency this way is computationally expensive for iterative pruning, and we use a slightly different but more efficient procedure. Before the first pruning iteration, the saliency is initialized from values computed off-line with 1000 minibatches and 10 iterations, as described above. Then, at every minibatch we compute the OBD criterion with only one iteration and apply exponential moving averaging with a coefficient of 0.99. We verified that this computes a close approximation to the Hessian's diagonal.
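A PyTorch sketch of this estimator on a toy model: draw random ±1 vectors v, form ∇(∇C · v) with a double backward pass, and average v ⊙ Hv. This is an illustration of Eq. (13) under simplified assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                         # tiny stand-in model
x, y = torch.randn(32, 10), torch.randn(32, 1)
params = list(model.parameters())

def hessian_diag_estimate(n_samples=10):
    """Estimate diag(H) = E[v * Hv] with random +/-1 vectors (Eq. 13)."""
    est = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        loss = nn.functional.mse_loss(model(x), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        vs = [torch.randint_like(p, 0, 2) * 2 - 1 for p in params]   # entries in {-1, +1}
        gv = sum((g * v).sum() for g, v in zip(grads, vs))           # (grad C) . v
        hvs = torch.autograd.grad(gv, params)                        # Hessian-vector product Hv
        for e, v, hv in zip(est, vs, hvs):
            e += v * hv / n_samples                                  # accumulate v * Hv
    return est

diag_h = hessian_diag_estimate()
# OBD saliency of each parameter: 0.5 * diag(H) * theta^2
saliency = [0.5 * d * p.detach() ** 2 for d, p in zip(diag_h, params)]
print(saliency[0].shape)
```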
A.7 CORRELATION OF TAYLOR CRITERION WITH GRADIENT AND ACTIVATION | 1611.06440#64 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 65 | A.7 CORRELATION OF TAYLOR CRITERION WITH GRADIENT AND ACTIVATION
The Taylor criterion is composed of both an activation term and a gradient term. In Figure 14, we depict the correlation between the Taylor criterion and each constituent part. We consider expected absolute value of the gradient instead of the mean, because otherwise it tends to zero. The plots are computed from pruning criteria for an unpruned VGG network ï¬ne-tuned for the Birds-200 dataset. (Values are shown after layer-wise normalization). Figure 14(a-b) depict the Taylor criterion in the y-axis for all neurons w.r.t. the gradient and activation components, respectively. The bottom 10% of neurons (lowest Taylor criterion, most likely to be pruned) are depicted in red, while the top 10% are shown in green. Considering all neurons, both gradient and activation components demonstrate a linear trend with the Taylor criterion. However, for the bottom 10% of neurons, as shown in Figure 14(c-d), the activation criterion shows much stronger correlation, with lower activations indicating lower Taylor scores.
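For reference, a sketch of how a per-feature-map Taylor score can be formed from the activation and gradient tensors (absolute value of their averaged product, followed by the layer-wise ℓ2 normalization of A.2); the tensors below are synthetic stand-ins:

```python
import numpy as np

def taylor_criterion(activations, gradients):
    """activations, gradients: arrays of shape (batch, C, H, W) for one layer.
    Returns one saliency score per feature map: |mean over batch and space of a * g|."""
    prod = activations * gradients
    return np.abs(prod.mean(axis=(0, 2, 3)))

acts  = np.random.randn(8, 16, 14, 14)   # synthetic activations for 16 feature maps
grads = np.random.randn(8, 16, 14, 14)   # synthetic gradients of the cost w.r.t. them

scores = taylor_criterion(acts, grads)
scores /= np.sqrt((scores ** 2).sum())   # layer-wise l2 normalization, as in A.2
print(scores)
```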
[Figure 14, panels (a)-(b): Taylor criterion (y-axis) versus its gradient and activation components (x-axis), for all neurons.]
i 0.002 activation (normalized) | 1611.06440#65 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06216 | 0 |
# Generative Deep Neural Networks for Dialogue: A Short Review
Iulian Vlad Serban Department of Computer Science and Operations Research, University of Montreal
# Ryan Lowe School of Computer Science, McGill University
# Laurent Charlin School of Computer Science, McGill University
# Joelle Pineau School of Computer Science, McGill University
# Abstract
Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.
# Introduction | 1611.06216#0 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 1 | # Introduction
Researchers have recently started investigating sequence-to-sequence (Seq2Seq) models for dialogue applications. These models typically use neural networks to both represent dialogue histories and to generate or select appropriate responses. Such models are able to leverage large amounts of data in order to learn meaningful natural language representations and generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. Although the Seq2Seq framework is different from the well-established goal-oriented setting [Gorin et al., 1997, Young, 2000, Singh et al., 2002], these models have already been applied to several real-world applications, with Microsoftâs system Xiaoice [Markoff and Mozur, 2015] and Googleâs Smart Reply system [Kannan et al., 2016] as two prominent examples. | 1611.06216#1 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 2 | Researchers have mainly explored two types of Seq2Seq models. The ï¬rst are generative models, which are usually trained with cross-entropy to generate responses word-by-word conditioned on a dialogue context [Ritter et al., 2011, Vinyals and Le, 2015, Sordoni et al., 2015, Shang et al., 2015, Li et al., 2016a, Serban et al., 2016b]. The second are discriminative models, which are trained to select an appropriate response from a set of candidate responses [Lowe et al., 2015, Bordes and Weston, 2016, Inaba and Takahashi, 2016, Yu et al., 2016]. In a related strand of work, researchers have also investigated applying neural networks to the different components of a standard dialogue system, including natural language understanding, natural language generation, dialogue state tracking and
evaluation [Wen et al., 2016, 2015, Henderson et al., 2013, Mrkši´c et al., 2015, Su et al., 2015]. In this paper, we focus on generative models trained with cross-entropy. | 1611.06216#2 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 3 | One weakness of current generative models is their limited ability to incorporate rich dialogue context and to generate meaningful and diverse responses [Serban et al., 2016b, Li et al., 2016a]. To overcome this challenge, we propose new generative models that are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure. Our experiments demonstrate the importance of the model architecture and the related inductive biases in achieving this improved performance.
(A) Classic LSTM (B) VHRED (C) MrRNN
Figure 1: Probabilistic graphical models for dialogue response generation. Variables w represent natural language utterances. Variables z represent discrete or continuous stochastic latent variables. (A): Classic LSTM model, which uses a shallow generation process. This is problematic because it has no mechanism for incorporating uncertainty and ambiguity and because it forces the model to generate compositional and long-term structure incrementally on a word-by-word basis. (B): VHRED expands the generation process by adding one latent variable for each utterance, which helps incorporate uncertainty and ambiguity in the representations and generate meaningful, diverse responses. (C): MrRNN expands the generation process by adding a sequence of discrete stochastic variables for each utterance, which helps generate responses with high-level compositional structure.
# 2 Models | 1611.06216#3 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 4 | # 2 Models
HRED: The Hierarchical Recurrent Encoder-Decoder model (HRED) [Serban et al., 2016b] is a type of Seq2Seq model that decomposes a dialogue into a two-level hierarchy: a sequence of utterances, each of which is a sequence of words. HRED consists of three recurrent neural networks (RNNs): an encoder RNN, a context RNN and a decoder RNN. Each utterance is encoded into a real-valued vector representation by the encoder RNN. These utterance representations are given as input to the context RNN, which computes a real-valued vector representation summarizing the dialogue at every turn. This summary is given as input to the decoder RNN, which generates a response word-by-word. Unlike the RNN encoders in previous Seq2Seq models, the context RNN is only updated once every dialogue turn and uses the same parameters for each update. This gives HRED an inductive bias that helps incorporate long-term context and learn invariant representations. | 1611.06216#4 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 5 | VHRED: The Latent Variable Hierarchical Recurrent Encoder-Decoder model (VHRED) [Serban et al., 2016c] is an HRED model with an additional component: a high-dimensional stochastic latent variable at every dialogue turn. As in HRED, the dialogue context is encoded into a vector representation using encoder and context RNNs. Conditioned on the summary vector at each dialogue turn, VHRED samples a multivariate Gaussian variable, which is given along with the summary vector as input to the decoder RNN. The multivariate Gaussian latent variable allows modelling ambiguity and uncertainty in the dialogue through the latent variable distribution parameters (mean and variance parameters). This provides a useful inductive bias, which helps VHRED encode the dialogue context into a real-valued embedding space even when the dialogue context is ambiguous or uncertain, and it helps VHRED generate more diverse responses. | 1611.06216#5 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 6 | MrRNN: The Multiresolution RNN (MrRNN) [Serban et al., 2016a] models dialogue as two parallel stochastic sequences: a sequence of high-level coarse tokens (coarse sequences), and a sequence of low-level natural language words (utterances). The coarse sequences follow a latent stochastic processâanalogous to hidden Markov modelsâwhich conditions the utterances through a hierar- chical generation process. The hierarchical generation process ï¬rst generates the coarse sequence, and conditioned on this generates the natural language utterance. In our experiments, the coarse
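All three models share a two-level structure: a word-level encoder per utterance, a turn-level context RNN, and a decoder conditioned on the context state. A minimal PyTorch sketch of that hierarchical encoder skeleton follows; the dimensions and module layout are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level utterance encoder + turn-level context RNN (HRED-style skeleton)."""
    def __init__(self, vocab=1000, emb=64, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.utt_rnn = nn.GRU(emb, hid, batch_first=True)   # encoder RNN (per utterance)
        self.ctx_rnn = nn.GRU(hid, hid, batch_first=True)   # context RNN (one step per turn)

    def forward(self, dialogue):                 # dialogue: (batch, turns, words) token ids
        b, t, w = dialogue.shape
        emb = self.embed(dialogue.reshape(b * t, w))
        _, utt = self.utt_rnn(emb)               # (1, b*t, hid): one vector per utterance
        utt = utt.reshape(b, t, -1)
        ctx, _ = self.ctx_rnn(utt)               # dialogue summary updated once per turn
        return ctx[:, -1]                        # context vector that conditions the decoder

enc = HierarchicalEncoder()
fake_dialogue = torch.randint(0, 1000, (2, 3, 7))   # 2 dialogues, 3 turns, 7 tokens each
print(enc(fake_dialogue).shape)                      # torch.Size([2, 128])
```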
Table 1: Model response examples. The arrows indicate change of turn. | 1611.06216#6 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 7 | Dialogue Context Hello I tried to install ubuntu studio but I get W : Failed to fetch <url > 404 Not Found when apt-get update â the ppa you added is not online hi community.. is there any difference between " /.bashrc " " /.bashrc. local " â yes, theyâre different ï¬les :-) the default /.bashrc sources /.bashrc. local â sorry I could undersatnd... If I write on terminal " gedit /.bashrc " it opens a ï¬le but how I open " /.bashrc. local " â " gedit /.bashrc. local "... â but it just open blank ï¬le nothing inside it Hey guys what do you in general use for irc something ubuntu xchat or xchat-gnome ? â without -gnome. that is just cut down â you mean drop xchat-gnome and go with xchat ? when setting up rules with iptables command only writes changes this ï¬le " etciptables. rules "? i ask this so i can backup before messing anything â sudo iptables-save something . dat to backup your rules restore with sudo iptables-restore < something . dat Model Response MrRNN Act. -Ent.: try a different mirror MrRNN Noun: something VHRED: but I dont know how to add it HRED: | 1611.06216#7 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 8 | . dat Model Response MrRNN Act. -Ent.: try a different mirror MrRNN Noun: something VHRED: but I dont know how to add it HRED: how do I do that ? MrRNN Act.-Ent.: open up gedit /.bashrc, there is a ï¬le called " something " MrRNN Noun: the ï¬le is empty.. VHRED: itâs in /.bashrc or /.bashrc HRED: what are you trying to do ? MrRNN Act. -Ent.: im using xchat right now MrRNN Noun: what is xchat-gnome ? VHRED: correct HRED: yes MrRNN Act. -Ent.: I donât see any reason why you need iptables to do that MrRNN Noun: are you using ubuntu ? VHRED: thx HRED: thanks | 1611.06216#8 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 9 | sequences are deï¬ned as either noun sequences or activity-entity pairs (predicate-argument pairs) extracted from the natural language utterances. The coarse sequences and utterances are modelled by two separate HRED models. The hierarchical generation provides an important inductive bias, because it helps MrRNN model high-level, compositional structure and generate meaningful and on-topic responses.
# 3 Experiments
We apply our generative models to dialogue response generation on the Ubuntu Dialogue Corpus [Lowe et al., 2015]. For each example, given a dialogue context, the model must generate an appropriate response. We also present results on Twitter in the Appendix. This task has been studied extensively in the recent literature [Ritter et al., 2011, Sordoni et al., 2015, Li et al., 2016a].
Corpus: The Ubuntu Dialogue Corpus consists of about half a million dialogues extracted from the #Ubuntu Internet Relayed Chat (IRC) channel. Users entering this chat channel usually have a speciï¬c technical problem. Typically, users ï¬rst describe their problem, and other users try to help them resolve it. The technical problems range from software-related and hardware-related issues (e.g. installing packages, ï¬xing broken drivers) to informational needs (e.g. ï¬nding software). | 1611.06216#9 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq)
models have shown promising results for unstructured tasks, such as word-level
dialogue response generation. The hope is that such models will be able to
leverage massive amounts of data to learn meaningful natural language
representations and response generation strategies, while requiring a minimum
amount of domain knowledge and hand-crafting. An important challenge is to
develop models that can effectively incorporate dialogue context and generate
meaningful and diverse responses. In support of this goal, we review recently
proposed models based on generative encoder-decoder neural network
architectures, and show that these models have better ability to incorporate
long-term dialogue history, to model uncertainty and ambiguity in dialogue, and
to generate responses with high-level compositional structure. | http://arxiv.org/pdf/1611.06216 | Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau | cs.CL, cs.AI, cs.NE, I.5.1; I.2.7 | 6 pages, 1 figure, 3 tables; NIPS 2016 workshop on Learning Methods
for Dialogue | null | cs.CL | 20161118 | 20161118 | [
{
"id": "1605.06069"
},
{
"id": "1606.00776"
},
{
"id": "1604.04562"
},
{
"id": "1604.06045"
},
{
"id": "1605.07683"
},
{
"id": "1606.01541"
}
] |
1611.06216 | 10 | Evaluation: We carry out an in-lab human study to evaluate the model responses. We recruit 5 human evaluators. We show each evaluator between 30 and 40 dialogue contexts with the ground truth response, and 4 candidate model responses. For each example, we ask the evaluators to compare the candidate responses to the ground truth response and dialogue context, and rate them for ï¬uency and relevancy on a scale 0â4, where 0 means incomprehensible or no relevancy and 4 means ï¬awless English or all relevant. In addition to the human evaluation, we also evaluate dialogue responses w.r.t. the activity-entity metrics proposed by Serban et al. [2016a]. These metrics measure whether the model response contains the same activities (e.g. download, install) and entities (e.g. ubuntu, ï¬refox) as the ground truth responses. Models that generate responses with the same activities and entities as the ground truth responsesâincluding expert responses, which often lead to solving the userâs problemâare given higher scores. Sample responses from each model are shown in Table 1. | 1611.06216#10 | Generative Deep Neural Networks for Dialogue: A Short Review | Researchers have recently started investigating deep neural networks for
1611.06216 | 11 | Table 2: Ubuntu evaluation using F1 metrics w.r.t. activities and entities (mean scores ± 90% confidence intervals), and human fluency and human relevancy scores given on a scale 0-4 (* indicates scores significantly different from baseline models at 90% confidence)
Model            F1 Activity   F1 Entity    Human Fluency  Human Relevancy
LSTM             1.18 ±0.18    0.87 ±0.15   -              -
HRED             4.34 ±0.34    2.22 ±0.25   2.98           1.01
VHRED            4.63 ±0.34    2.53 ±0.26   -              -
MrRNN Noun       4.04 ±0.33    6.31 ±0.42   3.48*          1.32*
MrRNN Act.-Ent.  11.43 ±0.54   3.72 ±0.33   3.42*          1.04
Results: The results are given in Table 2. The MrRNNs perform substantially better than the other models w.r.t. both the human evaluation study and the evaluation metrics based on activities and
| 1611.06216#11 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 12 | 3
entities. MrRNN with noun representations obtains an F1 entity score of 6.31, while all other models obtain less than half that, with F1 scores between 0.87 and 2.53, and human evaluators consistently rate its fluency and relevancy significantly higher than for all the baseline models. MrRNN with activity representations obtains an F1 activity score of 11.43, while all other models obtain less than half that, with F1 activity scores between 1.18 and 4.63, and it also performs substantially better than the baseline models w.r.t. the F1 entity score. This indicates that the MrRNNs have learned to model high-level, goal-oriented sequential structure in the Ubuntu domain. Following these, VHRED performs better than the HRED and LSTM models w.r.t. both activities and entities. This shows that VHRED generates more appropriate responses, which suggests that the latent variables are useful for modeling uncertainty and ambiguity. Finally, HRED performs better than the LSTM baseline w.r.t. both activities and entities, which underlines the importance of representing longer-term context. These conclusions are confirmed by additional experiments on response generation for the Twitter domain (see Appendix).
# 4 Discussion | 1611.06216#12 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 13 | # 4 Discussion
We have presented generative models for dialogue response generation. We have proposed architectural modifications with inductive biases towards 1) incorporating longer-term context, 2) handling uncertainty and ambiguity, and 3) generating diverse and on-topic responses with high-level compositional structure. Our experiments show the advantage of the architectural modifications quantitatively through human experiments and qualitatively through manual inspections. These experiments demonstrate the need for further research into generative model architectures. Although we have focused on three generative models, other model architectures such as memory-based models [Bordes and Weston, 2016, Weston et al., 2015] and attention-based models [Shang et al., 2015] have also demonstrated promising results and therefore deserve the attention of future research. | 1611.06216#13 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 14 | In another line of work, researchers have started proposing alternative training and response selection criteria [Weston, 2016]. Li et al. [2016a] propose ranking candidate responses according to a mutual information criterion, in order to incorporate dialogue context efficiently and retrieve on-topic responses. Li et al. [2016b] further propose a model trained using reinforcement learning to optimize a hand-crafted reward function. Both these models are motivated by the lack of diversity observed in the generative model responses. Similarly, Yu et al. [2016] propose a hybrid model, combining retrieval models, neural networks and hand-crafted rules, trained using reinforcement learning to optimize a hand-crafted reward function. In contrast to these approaches, without combining several models or having to modify the training or response selection criterion, VHRED generates more diverse responses than previous models. Similarly, by optimizing the joint log-likelihood over sequences, MrRNNs generate more appropriate and on-topic responses with compositional structure. Thus, improving generative model architectures has the potential to compensate, or even remove the need, for hand-crafted reward functions. | 1611.06216#14 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 15 | At the same time, the models we propose are not necessarily better language models, i.e. models that are more efficient at compressing dialogue data as measured by word perplexity. Although these models produce responses that are preferred by humans, they often result in higher test set perplexity than traditional LSTM language models. This suggests that maximizing log-likelihood (i.e. minimizing perplexity) is not a sufficient training objective for these models. An important line of future work therefore lies in improving the objective functions for training and response selection, as well as learning directly from interactions with real users.
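For reference, word perplexity is simply the exponentiated per-word negative log-likelihood, so "higher test set perplexity" means the model assigns lower average log-probability to held-out words. A minimal illustration follows; the numbers are made up and only show the relationship between the two quantities.

```python
import math

def perplexity(total_log_prob, num_words):
    """Corpus-level word perplexity from the summed natural-log likelihood."""
    return math.exp(-total_log_prob / num_words)

# Illustrative only: an average log-probability of -4.0 nats per word
# corresponds to a perplexity of exp(4.0) ~= 54.6.
print(perplexity(total_log_prob=-4.0 * 1000, num_words=1000))
```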
4
# References
A. Bordes and J. Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.
A. L. Gorin, G. Riccardi, and J. H. Wright. How may I help you? Speech Communication, 23(1):113-127, 1997.
M. Henderson, B. Thomson, and S. Young. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 467-471, 2013.
M. Inaba and K. Takahashi. Neural utterance ranking model for conversational dialogue systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 393, 2016. | 1611.06216#15 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 16 | A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), volume 36, pages 495-503, 2016.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. In NAACL, 2016a.
J. Li, W. Monroe, A. Ritter, and D. Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b.
R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In Proc. of SIGDIAL-2015, 2015. | 1611.06216#16 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 17 | J. Markoff and P. Mozur. For sympathetic ear, more Chinese turn to smartphone program. NY Times, 2015.
N. Mrkšić, D. Ó Séaghdha, B. Thomson, M. Gašić, P.-H. Su, D. Vandyke, T.-H. Wen, and S. Young. Multi-domain dialog state tracking using recurrent neural networks. In HLT-NAACL, pages 120-129, 2015.
A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In EMNLP, 2011.
I. V. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. Courville. Multiresolution recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776, 2016a. | 1611.06216#17 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 18 | I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776-3784, 2016b.
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016c.
L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. In ACL-IJCNLP, pages 1577-1586, 2015.
S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. JAIR, 16:105-133, 2002. | 1611.06216#18 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 19 | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015.
P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In SIGDIAL, 2015.
O. Vinyals and Q. Le. A neural conversational model. ICML Workshop, 2015.
T.-H. Wen, M. Gasic, N. Mrksic, P.-H. Su, D. Vandyke, and S. Young. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721, Lisbon, Portugal, September 2015. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1199. | 1611.06216#19 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 20 | T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv:1604.04562, 2016.
J. Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
J. Weston, S. Chopra, and A. Bordes. Memory networks. ICLR, 2015.
S. Young. Probabilistic methods in spoken-dialogue systems. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 358(1769), 2000.
Z. Yu, Z. Xu, A. W. Black, and A. I. Rudnicky. Strategy and policy learning for non-task-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 404, 2016.
5
# Appendix
# Twitter Results | 1611.06216#20 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 21 | 5
# Appendix
# Twitter Results
Corpus: We experiment on a Twitter Dialogue Corpus [Ritter et al., 2011] containing about one million dialogues. The task is to generate utterances to append to existing Twitter conversations. This task is typically categorized as a non-goal-driven task, because any fluent and on-topic response may be adequate.
Evaluation: We carry out a human study on Amazon Mechanical Turk (AMT). We show human evaluators a dialogue context along with two potential responses: one response generated from each model conditioned on the dialogue context. We ask evaluators to choose the response most appropriate to the dialogue context. If the evaluators are indifferent, they can choose neither response. For each pair of models we conduct two experiments: one where the example contexts contain at least 80 unique tokens (long context), and one where they contain at least 20 (not necessarily unique) tokens (short context). We experiment with the LSTM, HRED and VHRED models, as well as a TF-IDF retrieval-based baseline model. We do not experiment with the MrRNN models, because we do not have appropriate coarse representations for this domain. | 1611.06216#21 | Generative Deep Neural Networks for Dialogue: A Short Review |
1611.06216 | 22 | Results: The results given in Table 3 show that VHRED is strongly preferred in the majority of the experiments. In particular, VHRED is strongly preferred over the HRED and TF-IDF baseline models for both short and long context settings. VHRED is also strongly preferred over the LSTM baseline model for long contexts, although the LSTM model is preferred over VHRED for short contexts. For short contexts, the LSTM model is often preferred over VHRED because the LSTM model tends to generate very generic responses. Such generic or safe responses are reasonable for a wide range of contexts, but are not useful when applied throughout a dialogue, because the user would lose interest in the conversation.
In conclusion, VHRED performs substantially better overall than competing models, which suggests that the high-dimensional latent variables help model uncertainty and ambiguity in the dialogue context and help generate meaningful responses.
Table 3: Wins, losses and ties (in %) of VHRED against baselines based on the human study (mean preferences ± 90% confidence intervals, where * indicates significant differences at 90% confidence) | 1611.06216#22 | Generative Deep Neural Networks for Dialogue: A Short Review |
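Preference percentages of this kind are usually reported with a normal-approximation interval on the win rate. The sketch below assumes hypothetical vote counts and a 90% z-value of 1.645; it only illustrates how such intervals are commonly computed and does not reproduce the paper's analysis.

```python
import math

def preference_summary(wins, losses, ties, z=1.645):
    """Win/loss/tie percentages plus a 90% normal-approximation half-width on the win rate."""
    n = wins + losses + ties
    shares = {name: 100.0 * count / n
              for name, count in (("wins", wins), ("losses", losses), ("ties", ties))}
    p = wins / n
    half_width = 100.0 * z * math.sqrt(p * (1.0 - p) / n)
    return shares, half_width

# Hypothetical counts for one VHRED-vs-baseline comparison (not the paper's data).
shares, ci = preference_summary(wins=160, losses=100, ties=40)
print(shares, f"+/- {ci:.1f}")
```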
1611.05763 | 0 | arXiv:1611.05763v3 [cs.LG] 23 Jan 2017
# LEARNING TO REINFORCEMENT LEARN
JX Wang1, Z Kurth-Nelson1, D Tirumala1, H Soyer1, JZ Leibo1, R Munos1, C Blundell1, D Kumaran1,3, M Botvinick1,2 1DeepMind, London, UK 2Gatsby Computational Neuroscience Unit, UCL, London, UK 3Institute of Cognitive Neuroscience, UCL, London, UK
{wangjane, zebk, dhruvat, soyer, jzl, munos, cblundell, dkumaran, botvinick} @google.com
# ABSTRACT | 1611.05763#0 | Learning to reinforcement learn | In recent years deep reinforcement learning (RL) systems have attained
superhuman performance in a number of challenging task domains. However, a
major limitation of such applications is their demand for massive amounts of
training data. A critical present objective is thus to develop deep RL methods
that can adapt rapidly to new tasks. In the present work we introduce a novel
approach to this challenge, which we refer to as deep meta-reinforcement
learning. Previous work has shown that recurrent networks can support
meta-learning in a fully supervised context. We extend this approach to the RL
setting. What emerges is a system that is trained using one RL algorithm, but
whose recurrent dynamics implement a second, quite separate RL procedure. This
second, learned RL algorithm can differ from the original one in arbitrary
ways. Importantly, because it is learned, it is configured to exploit structure
in the training domain. We unpack these points in a series of seven
proof-of-concept experiments, each of which examines a key aspect of deep
meta-RL. We consider prospects for extending and scaling up the approach, and
also point out some potentially important implications for neuroscience. | http://arxiv.org/pdf/1611.05763 | Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick | cs.LG, cs.AI, stat.ML | 17 pages, 7 figures, 1 table | null | cs.LG | 20161117 | 20170123 | [
{
"id": "1611.01578"
},
{
"id": "1611.03824"
},
{
"id": "1611.03673"
},
{
"id": "1606.04474"
},
{
"id": "1611.02779"
},
{
"id": "1611.05397"
},
{
"id": "1602.02867"
},
{
"id": "1606.01885"
},
{
"id": "1604.00289"
}
] |
1611.05763 | 1 | # ABSTRACT
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
1
# INTRODUCTION | 1611.05763#1 | Learning to reinforcement learn |
1611.05763 | 2 | 1
# INTRODUCTION
Recent advances have allowed long-standing methods for reinforcement learning (RL) to be newly extended to such complex and large-scale task environments as Atari (Mnih et al., 2015) and Go (Silver et al., 2016). The key enabling breakthrough has been the development of techniques allowing the stable integration of RL with non-linear function approximation through deep learning (LeCun et al., 2015; Mnih et al., 2015). The resulting deep RL methods are attaining human- and often superhuman-level performance in an expanding list of domains (Jaderberg et al., 2016; Mnih et al., 2015; Silver et al., 2016). However, there are at least two aspects of human performance that they starkly lack. First, deep RL typically requires a massive volume of training data, whereas human learners can attain reasonable performance on any of a wide range of tasks with comparatively little experience. Second, deep RL systems typically specialize on one restricted task domain, whereas human learners can flexibly adapt to changing task conditions. Recent critiques (e.g., Lake et al., 2016) have invoked these differences as posing a direct challenge to current deep RL research. | 1611.05763#2 | Learning to reinforcement learn |
1611.05763 | 3 | In the present work, we outline a framework for meeting these challenges, which we refer to as deep meta-reinforcement learning, a label that is intended to both link it with and distinguish it from previous work employing the term "meta-reinforcement learning" (e.g. Schmidhuber et al., 1996; Schweighofer and Doya, 2003, discussed later). The key concept is to use standard deep RL techniques to train a recurrent neural network in such a way that the recurrent network comes to implement its own, free-standing RL procedure. As we shall illustrate, under the right circumstances, the secondary learned RL procedure can display an adaptiveness and sample efficiency that the original RL procedure lacks.
The following sections review previous work employing recurrent neural networks in the context of meta-learning and describe the general approach for extending such methods to the RL setting. We then present seven proof-of-concept experiments, each of which highlights an important ramification of the deep meta-RL setup by characterizing agent performance in light of this framework. We close with a discussion of key challenges for next-step research, as well as some potential implications for neuroscience.
# 2 METHODS
2.1 BACKGROUND: META-LEARNING IN RECURRENT NEURAL NETWORKS | 1611.05763#3 | Learning to reinforcement learn |
1611.05763 | 4 | # 2 METHODS
2.1 BACKGROUND: META-LEARNING IN RECURRENT NEURAL NETWORKS
Flexible, data-efficient learning naturally requires the operation of prior biases. In general terms, such biases can derive from two sources; they can either be engineered into the learning system (as, for example, in convolutional networks), or they can themselves be acquired through learning. The second case has been explored in the machine learning literature under the rubric of meta-learning (Schmidhuber et al., 1996; Thrun and Pratt, 1998).
In one standard setup, the learning agent is confronted with a series of tasks that differ from one another but also share some underlying set of regularities. Meta-learning is then defined as an effect whereby the agent improves its performance in each new task more rapidly, on average, than in past tasks (Thrun and Pratt, 1998). At an architectural level, meta-learning has generally been conceptualized as involving two learning systems: one lower-level system that learns relatively quickly, and which is primarily responsible for adapting to each new task; and a slower higher-level system that works across tasks to tune and improve the lower-level system. | 1611.05763#4 | Learning to reinforcement learn |
1611.05763 | 5 | A variety of methods have been pursued to implement this basic meta-learning setup, both within the deep learning community and beyond (Thrun and Pratt, 1998). Of particular relevance here is an approach introduced by Hochreiter and colleagues (Hochreiter et al., 2001), in which a recurrent neural network is trained on a series of interrelated tasks using standard backpropagation. A critical aspect of their setup is that the network receives, on each step within a task, an auxiliary input indicating the target output for the preceding step. For example, in a regression task, on each step the network receives as input an x value for which it is desired to output the corresponding y, but the network also receives an input disclosing the target y value for the preceding step (see Hochreiter et al., 2001; Santoro et al., 2016). In this scenario, a different function is used to generate the data in each training episode, but if the functions are all drawn from a single parametric family, then the system gradually tunes into this consistent structure, converging on accurate outputs more and more rapidly across episodes. | 1611.05763#5 | Learning to reinforcement learn |
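A minimal sketch of this input convention follows. It assumes a family of linear functions and feeds the target back with a one-step delay, which is the key ingredient described above; the task family, episode length, and zero-padding of the first step are illustrative assumptions, not details taken from Hochreiter et al.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_regression_episode(steps=20):
    """One episode from a parametric family y = a*x + b; each RNN input pairs the
    current x with the previous step's target, as in the setup sketched above."""
    a, b = rng.normal(size=2)                   # task parameters drawn per episode
    x = rng.uniform(-1.0, 1.0, size=steps)
    y = a * x + b
    prev_y = np.concatenate([[0.0], y[:-1]])    # target fed back with a one-step delay
    inputs = np.stack([x, prev_y], axis=1)      # shape (steps, 2): what the RNN sees
    targets = y                                 # what the RNN should output
    return inputs, targets

inputs, targets = make_regression_episode()
print(inputs.shape, targets.shape)
```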
1611.05763 | 6 | One interesting aspect of Hochreiter's method is that the process that underlies learning within each new task inheres entirely in the dynamics of the recurrent network, rather than in the backpropagation procedure used to tune that network's weights. Indeed, after an initial training period, the network can improve its performance on new tasks even if the weights are held constant (see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). A second important aspect of the approach is that the learning procedure implemented in the recurrent network is fit to the structure that spans the family of tasks on which the network is trained, embedding biases that allow it to learn efficiently when dealing with tasks from that family.
2.2 DEEP META-RL: DEFINITION AND KEY FEATURES | 1611.05763#6 | Learning to reinforcement learn |
1611.05763 | 7 | 2.2 DEEP META-RL: DEFINITION AND KEY FEATURES
Importantly, Hochreiter's original work (Hochreiter et al., 2001), as well as its subsequent extensions (Cotter and Conwell, 1990; Prokhorov et al., 2002; Santoro et al., 2016; Younger et al., 1999), only addressed supervised learning (i.e. the auxiliary input provided on each step explicitly indicated the target output on the previous step, and the network was trained using explicit targets). In the present work we consider the implications of applying the same approach in the context of reinforcement learning. Here, the tasks that make up the training series are interrelated RL problems, for example, a series of bandit problems varying only in their parameterization. Rather than presenting target outputs as auxiliary inputs, the agent receives inputs indicating the action output on the previous step and, critically, the quantity of reward resulting from that action. The same reward information is fed in parallel to a deep RL procedure, which tunes the weights of the recurrent network.
It is this setup, as well as its result, that we refer to as deep meta-RL (although from here on, for brevity, we will often simply call it meta-RL, with apologies to authors who have used that term
| 1611.05763#7 | Learning to reinforcement learn |
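The per-step input convention described above can be sketched as follows using PyTorch's LSTMCell: the recurrent core receives the current observation concatenated with a one-hot encoding of the previous action and the previous reward. The layer sizes and the bandit-style dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch

n_actions, obs_dim, hidden = 2, 3, 48

# Sketch: a recurrent core whose input is (observation, previous action, previous reward).
core = torch.nn.LSTMCell(obs_dim + n_actions + 1, hidden)
policy_head = torch.nn.Linear(hidden, n_actions)
value_head = torch.nn.Linear(hidden, 1)

def step(obs, prev_action, prev_reward, state):
    prev_a = torch.nn.functional.one_hot(prev_action, n_actions).float()
    x = torch.cat([obs, prev_a, prev_reward], dim=-1)   # (batch, obs_dim + n_actions + 1)
    h, c = core(x, state)
    return policy_head(h), value_head(h), (h, c)

obs = torch.zeros(1, obs_dim)
prev_action = torch.tensor([0])
prev_reward = torch.zeros(1, 1)
state = (torch.zeros(1, hidden), torch.zeros(1, hidden))
logits, value, state = step(obs, prev_action, prev_reward, state)
print(logits.shape, value.shape)
```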
1611.05763 | 8 | 2
previously). As in the supervised case, when the approach is successful, the dynamics of the recurrent network come to implement a learning algorithm entirely separate from the one used to train the network weights. Once again, after sufficient training, learning can occur within each task even if the weights are held constant. However, here the procedure the recurrent network implements is itself a full-fledged reinforcement learning algorithm, which negotiates the exploration-exploitation tradeoff and improves the agent's policy based on reward outcomes. A key point, which we will emphasize in what follows, is that this learned RL procedure can differ starkly from the algorithm used to train the network's weights. In particular, its policy update procedure (including features such as the effective learning rate of that procedure) can differ dramatically from those involved in tuning the network weights, and the learned RL procedure can implement its own approach to exploration. Critically, as in the supervised case, the learned RL procedure will be fit to the statistics spanning the multi-task environment, allowing it to adapt rapidly to new task instances.
2.3 FORMALISM | 1611.05763#8 | Learning to reinforcement learn |
1611.05763 | 9 | Let us write as D a distribution (the prior) over Markov Decision Processes (MDPs). We want to demonstrate that meta-RL is able to learn a prior-dependent RL algorithm, in the sense that it will perform well on average on MDPs drawn from D or slight modifications of D. An appropriately structured agent, embedding a recurrent neural network, is trained by interacting with a sequence of MDP environments (also called tasks) through episodes. At the start of a new episode, a new MDP task m ∼ D and an initial state for this task are sampled, and the internal state of the agent (i.e., the pattern of activation over its recurrent units) is reset. The agent then executes its action-selection strategy in this environment for a certain number of discrete time-steps. At each step t an action a_t ∈ A is executed as a function of the whole history H_t = {x_0, a_0, r_0, . . . , x_{t-1}, a_{t-1}, r_{t-1}, x_t} of the agent interacting in the MDP m during the current episode (set of states {x_s}_{0≤s≤t}, actions {a_s}_{0≤s<t}, and rewards {r_s}_{0≤s<t} | 1611.05763#9 | Learning to reinforcement learn |
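The training procedure described here can be sketched as an outer loop that samples a task m ∼ D for each episode, resets the agent's internal state, and then lets a history-dependent policy interact with the sampled MDP, feeding the previous action and reward back as inputs. The two-armed bandit prior and the random placeholder policy below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    """Draw one MDP from the prior D: here, a two-armed bandit with random success probabilities."""
    return rng.uniform(size=2)

def run_episode(policy, steps=100):
    arm_probs = sample_task()            # m ~ D
    agent_state = policy.reset()         # internal (recurrent) state reset at episode start
    prev_action, prev_reward, total = 0, 0.0, 0.0
    for _ in range(steps):
        action, agent_state = policy.act(prev_action, prev_reward, agent_state)
        reward = float(rng.random() < arm_probs[action])
        prev_action, prev_reward = action, reward   # fed back on the next step
        total += reward
    return total

class RandomPolicy:
    """Placeholder for the history-dependent recurrent policy."""
    def reset(self):
        return None
    def act(self, prev_action, prev_reward, state):
        return int(rng.integers(2)), state

print(run_episode(RandomPolicy()))
```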
1611.05763 | 11 | After training, the agentâs policy is ï¬xed (i.e. the weights are frozen, but the activations are changing due to input from the environment and the hidden state of the recurrent layer), and it is evaluated on a set of MDPs that are drawn either from the same distribution D or slight modiï¬cations of that distribution (to test the generalization capacity of the agent). The internal state is reset at the beginning of the evaluation of any new episode. Since the policy learned by the agent is history-dependent (as it makes uses of a recurrent network), when exposed to any new MDP environment, it is able to adapt and deploy a strategy that optimizes rewards for that task.
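A sketch of what "frozen weights, evolving activations" could look like in code, assuming a PyTorch LSTM-based agent (the paper does not specify a framework); at test time only the hidden state (h, c) changes from step to step.

```python
# Hedged sketch: weight updates are disabled, but the recurrent state keeps integrating history.
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=4, hidden_size=48)   # e.g. one-hot action (2) + reward + time
head = nn.Linear(48, 2)

for p in list(cell.parameters()) + list(head.parameters()):
    p.requires_grad_(False)                        # "learning" via weight updates is off

h = torch.zeros(1, 48)
c = torch.zeros(1, 48)                             # reset at the start of each test episode
x = torch.zeros(1, 4)                              # previous action / reward / time features
with torch.no_grad():
    for t in range(100):
        h, c = cell(x, (h, c))                     # within-episode adaptation lives here
        action = torch.distributions.Categorical(logits=head(h)).sample()
        # ...step the bandit/MDP with `action`, then rebuild `x` from (action, reward, t)...
```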
# 3 EXPERIMENTS
In order to evaluate the approach to learning that we have just described, we conducted a series of six proof-of-concept experiments, which we present here along with a seventh experiment originally reported in a related paper (Mirowski et al., 2016). One particular point of interest in these experiments was to see whether meta-RL could be used to learn an adaptive balance between exploration and exploitation, as demanded of any fully-fledged RL procedure. A second and still more important focus was on the question of whether meta-RL can give rise to learning that gains efficiency by capitalizing on task structure.
In order to examine these questions, we performed four experiments focusing on bandit tasks and two additional experiments focusing on Markov decision problems. All of our experiments (as well as the additional experiment we report) employ a common set of methods, with minor implementational variations. In all experiments, the agent architecture centers on a recurrent neural network (LSTM; Hochreiter and Schmidhuber, 1997) feeding into a soft-max output representing discrete actions. As detailed below, the parameters of this network core, as well as some other architectural details, varied across experiments (see Figure 1 and Table 1). However, it is important to emphasize that comparisons between specific architectures are outside the scope of this paper. Our main aim is to illustrate and validate the meta-RL framework in a more general way. To this end, all experiments used the high-level task setup previously described: both training and testing were organized into fixed-length episodes, each involving a task randomly sampled from a predetermined task distribution, with the LSTM hidden state initialized at the beginning of each episode.
| Parameter | Exps. 1 & 2 | Exp. 3 | Exp. 4 | Exp. 5 | Exp. 6 |
|---|---|---|---|---|---|
| No. threads | 1 | 1 | 1 | 1 | 32 |
| No. LSTMs | 1 | 1 | 1 | 1 | 2 |
| No. hiddens | 48 | 48 | 48 | 48 | 256/64 |
| Steps unrolled | 100 | 5 | 150 | 20 | 100 |
| βe | annealed | annealed | annealed | 0.05 | 0.001 |
| βv | 0.05 | 0.05 | 0.05 | 0.05 | 0.4 |
| Learning rate | tuned | 0.001 | 0.001 | tuned | tuned |
| Discount factor | tuned | 0.8 | 0.8 | tuned | tuned |
| Input | a, r, t | a, r, t | a, r, t | a, r, t, x | a, r, x |
| Observation | - | - | - | 1-hot | RGB (84x84) |
| No. trials/episode | 100 | 5 | 150 | 10 | 10 |
| Episode length | 100 | 5 | 150 | 20 | <3600 |
Table 1: List of hyperparameters. βe = coefficient of entropy regularization loss; in Exps. 1-4, βe is annealed from 1.0 to 0.0 over the course of training. βv = coefficient of value function loss (Mirowski et al., 2016). r = reward, a = last action, t = current time step, x = current observation. Exp. 1: Bandits with independent arms (Section 3.1.1); Exp. 2: Bandits with dependent arms I (Section 3.1.2); Exp. 3: Bandits with dependent arms II (Section 3.1.3); Exp. 4: Restless bandits (Section 3.1.4); Exp. 5: The "Two-Step Task" (Section 3.2.1); Exp. 6: Learning abstract task structure (Section 3.2.2).
Task-specific inputs and action outputs are described in conjunction with individual experiments. In all experiments except where specified, the input included a scalar indicating the reward received on the preceding time-step as well as a one-hot representation of the action sampled on that time-step.
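A small sketch of how the per-step network input described above could be assembled; the helper name and layout are illustrative, not the authors' code.

```python
# Illustrative input construction: one-hot previous action, previous reward, and,
# for the bandit experiments, the current trial index.
import numpy as np

def build_input(prev_action, prev_reward, t, n_actions, include_time=True):
    one_hot = np.zeros(n_actions, dtype=np.float32)
    one_hot[prev_action] = 1.0
    extras = [prev_reward, float(t)] if include_time else [prev_reward]
    return np.concatenate([one_hot, np.asarray(extras, dtype=np.float32)])

print(build_input(prev_action=1, prev_reward=1.0, t=7, n_actions=2))  # -> [0. 1. 1. 7.]
```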
All reinforcement learning was conducted using the Advantage Actor-Critic algorithm, as detailed in Mnih et al. (2016) and Mirowski et al. (2016) (see also Figure 1). Details of training, including the use of entropy regularization and a combined policy and value estimate loss, closely follow the methods detailed in Mirowski et al. (2016), with the exception that our experiments used a single thread unless otherwise noted. For a full listing of parameters refer to Table 1.
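For illustration, a hedged sketch of the loss structure used by advantage actor-critic with entropy regularization and a value-estimate loss, with coefficients playing the roles of βe and βv from Table 1. This is a simplified stand-in (plain discounted returns, no bootstrapping at the unroll boundary), not the authors' exact implementation.

```python
# Sketch of a combined A2C loss: policy gradient + beta_v * value loss - beta_e * entropy bonus.
import torch

def a2c_loss(log_probs, values, entropies, rewards, gamma=0.9, beta_v=0.05, beta_e=0.05):
    # log_probs, values, entropies, rewards: 1-D tensors over the unrolled steps
    returns = torch.zeros_like(rewards)
    R = torch.tensor(0.0)
    for t in reversed(range(len(rewards))):   # discounted returns (no bootstrap, for brevity)
        R = rewards[t] + gamma * R
        returns[t] = R
    advantages = returns - values
    policy_loss = -(log_probs * advantages.detach()).sum()
    value_loss = advantages.pow(2).sum()
    entropy_bonus = entropies.sum()
    return policy_loss + beta_v * value_loss - beta_e * entropy_bonus

T = 5
print(a2c_loss(torch.randn(T), torch.randn(T), torch.rand(T), torch.rand(T)))
```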
[Figure 1: architecture diagrams for (a) LSTM A2C, (b) LSTM A3C, and (c) Stacked-LSTM A3C; see the caption below.]
Figure 1: Advantage actor-critic with recurrence. In all architectures, reward and last action are additional inputs to the LSTM. For non-bandit environments, observation is also fed into the LSTM either as a one-hot or passed through an encoder model [3-layer encoder: two convolutional layers (first layer: 16 8x8 filters applied with stride 4, second layer: 32 4x4 filters with stride 2) followed by a fully connected layer with 256 units and then a ReLU non-linearity. See for details Mirowski et al. (2016)]. For bandit experiments, current time step is also fed in as input. π = policy; v = value function. A3C is the distributed multi-threaded asynchronous version of the advantage actor-critic algorithm (Mnih et al., 2016); A2C is single threaded. (a) Architecture used in experiments 1-5. (b) Convolutional-LSTM architecture used in experiment 6. (c) Stacked LSTM architecture with convolutional encoder used in experiments 6 and 7.
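A sketch of the 3-layer encoder described in the caption, assuming ReLU activations after each convolution and the 84x84 RGB observations listed in Table 1; written in PyTorch purely for illustration.

```python
# Illustrative encoder: 16 8x8 filters (stride 4), 32 4x4 filters (stride 2), then FC(256) + ReLU.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 256),   # an 84x84 input yields a 9x9 spatial map here
    nn.ReLU(),
)

features = encoder(torch.zeros(1, 3, 84, 84))
print(features.shape)             # torch.Size([1, 256])
```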
3.1 BANDIT PROBLEMS
As an initial setting for evaluating meta-RL, we studied a series of bandit problems. Except for a very limited set of bandit environments, it is intractable to compute the (prior-dependent) Bayesian-optimal strategy. Here we demonstrate that a recurrent system trained on a set of bandit environments drawn i.i.d. from a given distribution of environments produces a bandit algorithm which performs well on problems drawn from that distribution, and to a certain extent generalizes to related distributions. Thus, meta-RL learns a prior-dependent bandit algorithm.
The specific bandit instantiation of the general meta-RL procedure described in Section 2.3 is defined as follows. Let D be a training distribution over bandit environments. The meta-RL system is trained on a sequence of bandit environments through episodes. At the start of a new episode, its LSTM state is reset and a bandit task b ~ D is sampled. A bandit task is defined as a set of distributions, one for each arm, from which rewards are sampled. The agent plays in this bandit environment for a certain number of trials and is trained to maximize observed rewards. After training, the agent's policy is evaluated on a set of bandit tasks that are drawn from a test distribution D', which can either be the same as D or a slight modification of it.
We evaluate the resulting performance of the learned bandit algorithm by the cumulative regret, a measure of the loss (in expected rewards) suffered when playing sub-optimal arms. Writing $\mu_a(b)$ for the expected reward of arm $a$ in bandit environment $b$, and $\mu^*(b) = \max_a \mu_a(b) = \mu_{a^*(b)}(b)$ (where $a^*(b)$ is one optimal arm) for the optimal expected reward, we define the cumulative regret (in environment $b$) as $R_T(b) = \sum_{t=1}^{T} \big(\mu^*(b) - \mu_{a_t}(b)\big)$, where $a_t$ is the arm (action) chosen at time $t$. In experiment 4 (Restless bandits; Section 3.1.4), $\mu^*$ also depends on $t$. We report the performance (averaged over bandit environments drawn from the test distribution) either in terms of the cumulative regret, $\mathbb{E}_{b \sim D'}[R_T(b)]$, or in terms of the number of sub-optimal pulls, $\mathbb{E}_{b \sim D'}\big[\sum_{t=1}^{T} \mathbb{1}\{a_t \neq a^*(b)\}\big]$.
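As a concrete reading of these two metrics, the following sketch computes the cumulative regret and the number of sub-optimal pulls for one bandit environment with known arm means (helper names are illustrative).

```python
# Cumulative regret R_T(b) and sub-optimal pull count for a sequence of chosen arms.
import numpy as np

def cumulative_regret(arm_means, chosen_arms):
    mu = np.asarray(arm_means, dtype=float)
    mu_star = mu.max()                       # optimal expected reward mu*(b)
    chosen = np.asarray(chosen_arms)
    regret_per_step = mu_star - mu[chosen]   # mu*(b) - mu_{a_t}(b) at each step
    suboptimal_pulls = int((mu[chosen] < mu_star).sum())
    return regret_per_step.sum(), suboptimal_pulls

print(cumulative_regret([0.2, 0.8], chosen_arms=[0, 1, 1, 0, 1]))  # -> (approximately 1.2, 2)
```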
# 3.1.1 BANDITS WITH INDEPENDENT ARMS
We first consider a simple two-armed bandit task to examine the behavior of meta-RL under conditions where theoretical guarantees exist and general purpose algorithms apply. The arm distributions are independent Bernoulli distributions (rewards are 1 with probability p and 0 with probability 1 - p), where the parameters of each arm (p1 and p2) are sampled independently and uniformly over [0, 1]. We denote by Di the corresponding distribution over these independent bandit environments (where the subscript i stands for independent arms).
At the beginning of each episode, a new bandit task is sampled and held constant for 100 trials. Training lasted for 20,000 episodes. The network is given as input the last reward, last action taken, and the trial number t, subsequently producing the action for the next trial t + 1 (Figure 1). After training, we evaluated on 300 new episodes with the learning rate set to zero (the learned policy is fixed).
Across model instances, we randomly sampled learning rate and discount, following Mnih et al. (2016). For all figures, we plotted the average of the top 5 runs of 100 randomly sampled hyperparameter settings, where the top agents were selected from the first half of the 300 evaluation episodes and performance was plotted for the second half. We measured the cumulative expected regret across the episode, comparing with several algorithms tailored for this independent bandit setting: Gittins indices (Gittins, 1979) (which is Bayesian optimal in the finite-horizon case), UCB (Auer et al., 2002) (which comes with theoretical finite-time regret guarantees), and Thompson sampling (Thompson, 1933) (which is asymptotically optimal in this setting: see Kaufmann et al., 2012b). Model simulations were conducted with the PymaBandits toolbox from (Kaufmann et al., 2012a) and custom Matlab scripts.
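For reference, a minimal sketch of the Thompson sampling baseline in this independent Bernoulli setting (Beta(1, 1) priors, sample a mean per arm, pull the argmax, update the posterior); this is the textbook procedure, not the PymaBandits implementation used by the authors.

```python
# Thompson sampling for independent Bernoulli arms with Beta posteriors.
import random

def thompson_episode(arm_probs, n_trials=100):
    n_arms = len(arm_probs)
    alpha = [1.0] * n_arms        # Beta posterior parameters per arm
    beta = [1.0] * n_arms
    total_reward = 0.0
    for _ in range(n_trials):
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        a = max(range(n_arms), key=lambda i: samples[i])
        r = 1.0 if random.random() < arm_probs[a] else 0.0
        alpha[a] += r
        beta[a] += 1.0 - r
        total_reward += r
    return total_reward

print(thompson_episode([0.3, 0.7]))
```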
As shown in Figure 2a (green line; "Independent"), meta-RL outperforms both Thompson sampling (gray dashed line) and UCB (light gray dashed line), although it performs less well compared to Gittins (black dashed line). To verify the critical importance of providing reward information to the LSTM, we removed this input, leaving all other inputs as before. As expected, performance was at chance levels on all bandit tasks.
[Figure 2: panels (a)-(f), plotting cumulative regret and sub-optimal arm pulls against trial number for the independent, dependent-uniform, easy, medium, and hard bandit distributions; see the caption below.]
Figure 2: Performance on independent- and correlated-arm bandits. We report performance as the cumulative expected regret RT for 150 test episodes, averaged over the top 5 hyperparameters for each agent-task con- ï¬guration, where the top 5 was determined based on performance on a separate set of 150 test episodes. (a) LSTM A2C trained and evaluated on bandits with independent arms (distribution Di; see text), and compared with theoretically optimal models. (b) A single agent playing the medium difï¬culty task with distribution Dm. Suboptimal arm pulls over trials are depicted for 300 episodes. (c) LSTM A2C trained and evaluated on bandits with dependent uniform arms (distribution Du), (d) trained on medium bandit tasks (Dm) and tested on easy (De), and (e) trained on medium (Dm) and tested on hard task (Dh). (f) Cumulative regret for all possible combinations of training and testing environments (Di, Du, De, Dm, Dh).
3.1.2 BANDITS WITH DEPENDENT ARMS (I)
As we have emphasized, a key property of meta-RL is that it gives rise to a learned RL algorithm that exploits consistent structure in the training distribution. In order to garner empirical evidence for this point, we tested the agent from our first experiment in a more structured bandit task. Specifically, we trained the system on two-arm bandits in which arm reward distributions are correlated. In this setting, unlike the one studied in the previous section, experience with either arm provides information about the other. Standard bandit algorithms, including UCB and Thompson sampling, perform suboptimally in this setting, as they are not designed to exploit such correlations. In some cases it is possible to tailor algorithms for specific arm structures (see for example Lattimore and Munos, 2014), but extensive problem-specific analysis is typically required. Our approach aims to learn a structure-dependent bandit algorithm directly from experience with the target bandit domain.
We consider Bernoulli distributions where the parameters (p1, p2) of the two arms are correlated in the sense that p1 = 1 - p2. We consider several training and test distributions. The uniform means that p1 ~ U([0, 1]) (uniform distribution over the unit interval). The easy means that p1 ~ U({0.1, 0.9}) (uniform distribution over those two possible values), and similarly we call medium when p1 ~ U({0.25, 0.75}) and hard when p1 ~ U({0.4, 0.6}). We denote by Du, De, Dm, and Dh the corresponding induced distributions over bandit environments. In addition
we also considered the independent uniform distribution (as in the previous section, Di) where p1, p2 ~ U([0, 1]) independently. Agents were both trained and tested on those five distributions over bandit environments (among which four correspond to correlated distributions: Du, De, Dm and Dh; and one to the independent case: Di). As a validation of the names given to the task distributions (De, Dm, Dh), results show that the easy task is easier to learn than the medium which itself is easier than the hard one (Figure 2f). This is compatible with the general notion that the hardness of a bandit problem is inversely proportional to the difference between the expected reward of the optimal and sub-optimal arms. We again note that withholding the reward input to the LSTM resulted in chance performance on even the easiest bandit task, as should be expected.
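A small sketch of the five task distributions defined above (Du, De, Dm, Dh, and the independent control Di), written as a task sampler with illustrative names.

```python
# Correlated two-armed bandit task distributions: p1 is drawn per episode and p2 = 1 - p1.
import random

def sample_task(dist):
    if dist == "uniform":            # Du: p1 ~ U([0, 1])
        p1 = random.random()
    elif dist == "easy":             # De: p1 ~ U({0.1, 0.9})
        p1 = random.choice([0.1, 0.9])
    elif dist == "medium":           # Dm: p1 ~ U({0.25, 0.75})
        p1 = random.choice([0.25, 0.75])
    elif dist == "hard":             # Dh: p1 ~ U({0.4, 0.6})
        p1 = random.choice([0.4, 0.6])
    elif dist == "independent":      # Di: both arms drawn independently
        return [random.random(), random.random()]
    else:
        raise ValueError(dist)
    return [p1, 1.0 - p1]

print({d: sample_task(d) for d in ["uniform", "easy", "medium", "hard", "independent"]})
```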
Figure 2f reports the results of all possible training-testing regimes. From observing the cumulative expected regrets, we make the following observations: i) agents trained in structured environments (Du, De, Dm, and Dh) develop prior knowledge that can be used effectively when tested on structured distributions, performing comparably to Gittins (Figure 2c-f), and superiorly compared to agents trained on independent arms (Di) in all structured tasks at test (Figure 2f). This is because an agent trained on independent rewards (Di) has not learned to exploit the reward correlations that are useful in those structured tasks. ii) Conversely, previous training on any structured distribution (Du, De, Dm, or Dh) hurts performance when agents are tested on an independent distribution (Di; Figure 2f). This makes sense, as training on correlated arms may produce a policy that relies on specific reward structure, thereby impacting performance in problems where no such structure exists. iii) Whilst the previous results emphasize the point that
meta-RL gives rise to a separate learnt RL algorithm that implements prior-dependent bandit strategies, results also provide evidence that there is some generalization beyond the exact training distribution encountered (Figure 2f). For example, agents trained on the distributions De and Dm perform well when tested over a much wider structured distribution (i.e. Du). Further, our evidence suggests that there is generalization from training on the easier tasks (De, Dm) to testing on the hardest task (Dh; Figure 2e), with similar or even marginally superior performance as compared to training on the hard distribution Dh itself (Figure 2f). In contrast, training on the hard distribution Dh results in relatively poor generalization to other structured distributions (Du, De, Dm), suggesting that training purely on hard instances may result in a learned RL algorithm that is more constrained by prior knowledge, perhaps due to the difficulty of solving the original problem.
# 3.1.3 BANDITS WITH DEPENDENT ARMS (II)
In the previous experiment, the agent could outperform standard bandit algorithms by making use of learned dependencies between arms. However, it could do this while always choosing what it believes to be the highest-paying arm. We next examine a problem where information can be gained by paying a short-term reward cost. Similar problems have been examined before as providing a challenge to standard bandit algorithms (see e.g. Russo and Van Roy, 2014). In contrast, humans and animals make decisions that sacrifice immediate reward for information gain (e.g. Bromberg-Martin and Hikosaka, 2009).
In this experiment, the agent was trained on 11-armed bandits with strong dependencies between arms. All arms had deterministic payouts. Nine "non-target" arms had reward = 1, and one "target" arm had reward = 5. Meanwhile, arm a11 was always "informative", in that the target arm was indexed by 10 times a11's reward (e.g. a reward of 0.2 on a11 indicated that a2 was the target arm). Thus, a11's payouts ranged from 0.1 to 1. In each episode, the index of the target arm was randomly assigned. On the first trial of each episode, the agent could not know which arm was the target, so the informative arm returned expected reward 0.55 and every candidate target arm returned expected reward 1.4. Choosing the informative arm thus meant foregoing immediate reward, but with the compensation of valuable information. Episodes were five steps long. Again, the reward on the previous trial was provided as an additional observation to the agent. To facilitate learning, this was encoded in 1-hot format.
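A sketch of this 11-armed task as described (using 0-indexed arms for convenience): nine non-target arms pay 1, the target pays 5, and the last arm pays (target index)/10, so a single pull of it identifies the target.

```python
# Illustrative construction of the informative-arm bandit and the optimal two-step strategy.
import random

def sample_informative_bandit():
    target = random.randrange(10)                 # index of the high-paying arm (0..9)
    def pull(arm):
        if arm == 10:                             # the informative arm
            return (target + 1) / 10.0            # e.g. 0.2 indicates the 2nd arm is the target
        return 5.0 if arm == target else 1.0
    return pull, target

pull, target = sample_informative_bandit()
first = pull(10)                                  # pay the information cost once
decoded = int(round(first * 10)) - 1              # decode the target index from the payout
print(target, decoded, pull(decoded))             # exploit: reward 5 on every remaining trial
```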
Results are shown in Figure 3. The agent learned the optimal long-run strategy of sampling the informative arm once, despite the short-term cost, and then using the resulting information to exploit the high-value target arm. Thompson sampling, if supplied the true prior, searched potential target arms and exploited the target if found. UCB performed worse because it sampled every arm once even if the target arm was found early.
[Figure 3: cumulative regret against trial number for LSTM A2C, the optimal policy, Thompson sampling, and UCB; see the caption below.]
Figure 3: Learned RL procedure pays immediate cost to gain information to improve long-run returns. In this task, one arm is lower-paying but provides perfect information about which of the other ten arms is highest-paying. The remaining nine arms are intermediate in reward. The index of the informative arm is ï¬xed between episodes, but the index of the highest-paying arm is randomized between episodes. On the ï¬rst trial, the trained agent samples the informative arm. On subsequent trials, the agent uses the information it gained to deterministically exploit the highest-paying arm. Thompson sampling and UCB are not able to take advantage of the dependencies between arms.
# 3.1.4 RESTLESS BANDITS
In previous experiments we considered stationary problems where the agent's actions yielded information about task parameters that remained fixed throughout each episode. Next, we consider a bandit problem in which reward probabilities change over the course of an episode, with different rates of change (volatilities) in different episodes. To perform well, the agent must not only track the best arm, but also infer the volatility of the episode and adjust its own learning rate accordingly. In such an environment, learning rates should be higher when the environment is changing rapidly, because past information becomes irrelevant more quickly (Behrens et al., 2007; Sutton and Barto, 1998).
We tested whether meta-RL would learn such a flexible RL policy using a two-armed Bernoulli bandit task with reward probabilities p1 and 1-p1. The value of p1 changed slowly in 'low vol' episodes and quickly in 'high vol' episodes. The agent had no way of knowing which type of episode it was in, except for its reward history within the episode. Figure 4a shows example 'low vol' and 'high vol' episodes. Reward magnitude was fixed at 1, and episodes were 100 steps long. UCB and Thompson sampling were again implemented for comparison. The confidence bound term $\sqrt{\chi \log n / n_i}$
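The following sketch simulates this restless bandit together with a Rescorla-Wagner learner that uses a fixed learning rate, to make the setup concrete. The per-step jump probabilities for the 'low vol' and 'high vol' conditions and the softmax temperature are illustrative assumptions rather than the settings used in the experiments.

```python
# Volatile ("restless") two-armed Bernoulli bandit plus a fixed-learning-rate
# Rescorla-Wagner learner; jump probabilities and temperature are assumptions.
import math
import random

EPISODE_LEN = 100


def run_episode(jump_prob, alpha=0.5, beta=5.0):
    """Simulate one episode; returns total reward collected by the R-W learner."""
    p1 = random.random()            # reward probability of arm 0 (arm 1 pays with 1 - p1)
    values = [0.5, 0.5]             # R-W value estimates
    total = 0.0
    for _ in range(EPISODE_LEN):
        # With some probability the environment jumps to a new reward probability.
        if random.random() < jump_prob:
            p1 = random.random()
        # Softmax choice between the two arms.
        q0, q1 = beta * values[0], beta * values[1]
        p_choose_0 = 1.0 / (1.0 + math.exp(q1 - q0))
        arm = 0 if random.random() < p_choose_0 else 1
        # Bernoulli reward with anti-correlated probabilities p1 and 1 - p1.
        p_reward = p1 if arm == 0 else 1.0 - p1
        r = 1.0 if random.random() < p_reward else 0.0
        total += r
        # Rescorla-Wagner update with a fixed learning rate.
        values[arm] += alpha * (r - values[arm])
    return total


if __name__ == "__main__":
    for label, jump in [("low vol", 0.02), ("high vol", 0.2)]:
        avg = sum(run_episode(jump) for _ in range(500)) / 500
        print(label, "mean reward:", round(avg, 2))
```

The point of the comparison in the text is that this learner is stuck with one α for every episode, whereas the meta-learned agent behaves as though it re-tunes its effective learning rate to the volatility it infers from the reward history.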
As in the previous experiment, meta-RL achieved lower regret in test than Thompson sampling, UCB, or the Rescorla-Wagner (R-W) learning rule (Figure 4b; Rescorla et al., 1972) with fixed learning rate (α=0.5). To test whether the agent adjusted its effective learning rate to match environments with different volatility levels, we fit R-W models to the agent's behavior, concatenating episodes into blocks of 10, where each block consisted of only 'low vol' or only 'high vol' episodes. We considered four different models encompassing different combinations of three parameters: learning rate α, softmax inverse temperature β, and a lapse rate ε to account for unexplained choice variance not related to estimated value (Economides et al., 2015). Model 'b' included only β, 'ab' included α and β, 'be' included β and ε, and 'abe' included all three. All parameters were estimated separately on each block of 10 episodes.
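A rough sketch of this fitting procedure is given below: the full 'abe' variant of the Rescorla-Wagner choice model, with learning rate α, inverse temperature β, and lapse rate ε, is fit to a block of (choice, reward) trials by maximum likelihood and scored with BIC; the other variants simply fix ε to 0 and/or α to 0.5. The coarse grid search stands in for whatever optimizer was actually used, and the toy data are invented.

```python
# Maximum-likelihood fit of a Rescorla-Wagner choice model (alpha, beta, eps) plus BIC.
import math


def rw_negloglik(choices, rewards, alpha, beta, eps):
    """Negative log-likelihood of the observed choices under the R-W model."""
    values = [0.5, 0.5]
    nll = 0.0
    for c, r in zip(choices, rewards):
        q0, q1 = beta * values[0], beta * values[1]
        p0 = 1.0 / (1.0 + math.exp(q1 - q0))          # softmax choice probability
        p_choice = (1.0 - eps) * (p0 if c == 0 else 1.0 - p0) + eps / 2.0
        nll -= math.log(max(p_choice, 1e-12))
        values[c] += alpha * (r - values[c])          # R-W value update
    return nll


def fit_abe(choices, rewards):
    """Grid-search ML estimates of (alpha, beta, eps) and the corresponding BIC."""
    best = None
    for alpha in [i / 20 for i in range(1, 20)]:
        for beta in [0.5, 1, 2, 4, 8, 16]:
            for eps in [0.0, 0.05, 0.1, 0.2]:
                nll = rw_negloglik(choices, rewards, alpha, beta, eps)
                if best is None or nll < best[0]:
                    best = (nll, alpha, beta, eps)
    nll, alpha, beta, eps = best
    k = 3                                             # free parameters in model "abe"
    bic = k * math.log(len(choices)) + 2.0 * nll
    return {"alpha": alpha, "beta": beta, "eps": eps, "nll": nll, "bic": bic}


if __name__ == "__main__":
    # Toy data: an agent that mostly picks arm 0 and is mostly rewarded.
    choices = [0] * 80 + [1] * 20
    rewards = [1] * 60 + [0] * 40
    print(fit_abe(choices, rewards))
```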
In models where ε and α were not free, they were fixed to 0 and 0.5, respectively. Model comparison by Bayesian Information Criterion (BIC) indicated that meta-RL's behavior was better described by a model with different learning rates for each block than a model with a fixed learning rate across blocks. As a control, we performed the same model comparison on the behavior produced by the best R-W agent, finding no benefit of allowing different learning rates across episodes (models 'abe' and 'ab' vs 'be' and 'b'; Figure 4c,d). In these models, the parameter estimates for meta-RL's behavior were strongly related to the volatility of the episodes, indicating that meta-RL adjusted its learning rate to the volatility of the episode, whereas model fitting on the R-W behavior simply recovered the fixed parameters (Figure 4e,f).
Figure 4: Learned RL procedure adapts its own learning rate to the environment. (a) Agents were trained on two-armed bandits with perfectly anti-correlated Bernoulli reward probabilities, p1 and 1-p1. Two example episodes are shown. p1 changed within an episode (solid black line), with a fast Poisson jump rate in 'high vol' episodes and a slow rate in 'low vol' episodes. (b) The trained LSTM agent outperformed UCB, Thompson sampling, and a Rescorla-Wagner (R-W) learner with fixed learning rate α=0.5 (selected for being optimal on average in this distribution of environments). (c,d) We fit R-W models by maximum likelihood both to the behavior of R-W (as a control) and to the behavior of LSTM. Models including a learning rate that could vary between episodes ('ab' and 'abe') outperformed models without these free parameters on LSTM's data, but not on R-W's data. Addition of a lapse parameter further improved model fits on LSTM's data ('be' and
3.2 MARKOV DECISION PROBLEMS
The foregoing experiments focused on bandit tasks in which actions do not affect the task's underlying state. We turn now to MDPs where actions do influence state. We begin with a task derived from the neuroscience literature and then turn to a task, originally studied in the context of animal learning, which requires learning of abstract task structure. As in the previous experiments, our focus is on examining how meta-RL adapts to invariances in task structure. We wrap up by reviewing an experiment recently reported in a related paper (Mirowski et al., 2016), which demonstrates how meta-RL can scale to large-scale navigation tasks with rich visual inputs.
3.2.1 THE "TWO-STEP TASK"
Here we examine meta-RL in a setting that has been widely used in the neuroscience literature to distinguish the contribution of different systems viewed to support decision making (Daw et al., 2005). Specifically, this paradigm, known as the "two-step task" (Daw et al., 2011), was developed to dissociate a model-free system that caches values of actions in states (e.g. TD(1) Q-learning; see Sutton and Barto, 1998), from a model-based system which learns an internal model of the environment and evaluates the value of actions at the time of decision-making through look-ahead planning (Daw et al., 2005). Our interest was in whether meta-RL would give rise to behavior emulating a model-based strategy, despite the use of a model-free algorithm (in this case A2C) to train the system weights.
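The contrast at stake can be summarized in a few lines of code: a model-based learner values first-stage actions by looking ahead through the transition model, whereas a model-free TD(1) learner simply caches the final reward against the chosen first-stage action. The transition probabilities here anticipate the task variant described in the next paragraph; the value estimates are placeholder numbers.

```python
# Two ways of valuing first-stage actions in a two-step task.
TRANSITIONS = {"a1": {"S2": 0.75, "S3": 0.25},
               "a2": {"S2": 0.25, "S3": 0.75}}


def model_based_values(second_stage_values):
    # Look ahead through the (known) transition model to current second-stage estimates.
    return {a: sum(p * second_stage_values[s] for s, p in probs.items())
            for a, probs in TRANSITIONS.items()}


def model_free_update(q, action, reward, lr=0.1):
    # TD(1)-style caching: credit the chosen first-stage action with the final reward,
    # ignoring which second-stage state actually produced it.
    q[action] += lr * (reward - q[action])
    return q


if __name__ == "__main__":
    print(model_based_values({"S2": 0.9, "S3": 0.1}))   # roughly {'a1': 0.7, 'a2': 0.3}
    print(model_free_update({"a1": 0.5, "a2": 0.5}, "a1", 1.0))
```

Because only the model-based computation consults the transition structure, the two systems make different predictions about how choices should depend on whether the previous transition was common or rare, which is what the stay-probability analysis below exploits.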
We used a modified version of the two-step task, designed to bolster the utility of model-based over model-free control (see Kool et al., 2016). The task's structure is diagrammed in Figure 5a. From the first-stage state S1, action a1 leads to second-stage states S2 and S3 with probability 0.75 and 0.25, respectively, while action a2 leads to S2 and S3 with probabilities 0.25 and 0.75. One second-stage state yielded a reward of 1.0 with probability 0.9 (and otherwise zero); the other yielded the same reward with probability 0.1. The identity of the higher-valued state was assigned randomly for each episode. Thus, the expected values for the two first-stage actions were either ra = 0.9 and rb = 0.1, or ra = 0.1 and rb = 0.9. All three states were represented by one-hot vectors, with the transition model held constant across episodes: i.e. only the expected value of the second stage states changed from episode to episode.
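A minimal simulator for this task structure, under the probabilities just described, might look as follows; the state and action names are arbitrary labels.

```python
# Sketch of the modified two-step task: a1 reaches S2 with probability 0.75 (S3 otherwise),
# a2 reaches S3 with probability 0.75, and one second-stage state pays 1.0 with
# probability 0.9 versus 0.1 for the other, re-drawn every episode.
import random


class TwoStepTask:
    def __init__(self):
        self.good_state = random.choice(["S2", "S3"])   # randomized per episode

    def step(self, action):
        """Take a first-stage action ('a1' or 'a2'); return (second_stage_state, reward)."""
        if action == "a1":
            state = "S2" if random.random() < 0.75 else "S3"
        else:
            state = "S3" if random.random() < 0.75 else "S2"
        p_reward = 0.9 if state == self.good_state else 0.1
        reward = 1.0 if random.random() < p_reward else 0.0
        return state, reward


if __name__ == "__main__":
    env = TwoStepTask()
    for _ in range(5):
        print(env.step("a1"))
```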
We applied the conventional analysis used in the neuroscience literature to dissociate model-free from model-based control (Daw et al., 2011). This focuses on the "stay probability," that is, the probability with which a first-stage action is selected at trial t + 1 following a second-stage reward at trial t, as a function of whether trial t involved a common transition (e.g. action a1 at state S1 led to S2) or rare transition (action a2 at state S1 led to S3). Under the standard interpretation (see Daw et al., 2011), model-free control, à la TD(1), predicts that there should be a main effect of reward: First-stage actions will tend to be repeated if followed by reward, regardless of transition type, and such actions will tend not to be repeated (choice switch) if followed by non-reward (Figure 5b). In contrast, model-based control predicts an interaction between the reward and transition type, reflecting a more goal-directed strategy, which takes the transition structure into account. Intuitively, if you receive a second-stage reward (e.g. at S2) following a
The results of the stay-probability analysis performed on the agent's choices show a pattern conventionally interpreted as implying the operation of model-based control (Figure 5c). As in previous experiments, when reward information was withheld at the level of network input, performance was at chance levels.
If interpreted following standard practice in neuroscience, the behavior of the model in this experiment reflects a surprising effect: training with model-free RL gives rise to behavior reflecting model-based control. We hasten to note that different interpretations of the observed pattern of behavior are available (Akam et al., 2015), a point to which we will return below. However, notwithstanding this caveat, the results of the present experiment provide a further illustration of the point that the learning procedure that emerges from meta-RL can differ starkly from the original RL algorithm used to train the network weights, and takes a form that exploits consistent task structure.
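For reference, the stay-probability analysis itself reduces to a small amount of bookkeeping, sketched below under the assumption that behavior is logged as (first-stage action, second-stage state, reward) triples; the demo data are invented.

```python
# Stay-probability analysis: bin trial t by reward and transition type (common vs rare),
# and measure how often the same first-stage action is repeated on trial t + 1.
from collections import defaultdict

COMMON = {("a1", "S2"), ("a2", "S3")}     # common transitions in the task


def stay_probabilities(trials):
    """trials: list of (action, second_stage_state, reward) tuples in order."""
    counts = defaultdict(lambda: [0, 0])   # condition -> [stays, total]
    for (a, s, r), (a_next, _, _) in zip(trials, trials[1:]):
        transition = "common" if (a, s) in COMMON else "rare"
        cond = ("rewarded" if r > 0 else "unrewarded", transition)
        counts[cond][1] += 1
        counts[cond][0] += int(a_next == a)
    return {cond: stays / total for cond, (stays, total) in counts.items() if total}


if __name__ == "__main__":
    # Tiny invented example in the model-based pattern: stay after rewarded-common and
    # unrewarded-rare trials, switch after a common unrewarded trial.
    demo = [("a1", "S2", 1.0), ("a1", "S3", 0.0), ("a1", "S2", 1.0),
            ("a1", "S2", 0.0), ("a2", "S3", 1.0), ("a2", "S2", 0.0), ("a2", "S3", 1.0)]
    print(stay_probabilities(demo))
```

A model-free learner would show high stay probability after any rewarded trial; the reward-by-transition interaction reported above is the signature conventionally attributed to model-based control.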
# 3.2.2 LEARNING ABSTRACT TASK STRUCTURE