Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi (arXiv:1602.07261 [cs.CV])

Abstract: Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge.
components while keeping the rest of the network stable. Not simplifying earlier choices resulted in networks that looked more complicated than they needed to be. In our newer experiments, for Inception-v4 we decided to shed this unnecessary baggage and made uniform choices for the Inception blocks for each grid size. Please refer to Figure 9 for the large-scale structure of the Inception-v4 network and Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of its components. All the convolutions not marked with "V" in the figures are same-padded, meaning that their output grid matches the size of their input. Convolutions marked with "V" are valid-padded, meaning that the input patch of each unit is fully contained in the previous layer and the grid size of the output activation map is reduced accordingly.
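To make the padding convention concrete, the following is a minimal sketch (in Python; the helper name and the worked example are ours, not from the paper) of how the output grid size follows from same versus valid ("V") padding:

```python
def conv_output_size(input_size, kernel_size, stride=1, padding="same"):
    """Spatial output size of a convolution under 'same' vs. valid ('V') padding."""
    if padding == "same":
        # Same-padded: the output grid matches the input grid for stride 1,
        # and is ceil(input_size / stride) in general.
        return -(-input_size // stride)
    if padding == "valid":
        # Valid-padded: every patch lies fully inside the input,
        # so the grid shrinks accordingly.
        return (input_size - kernel_size) // stride + 1
    raise ValueError(f"unknown padding: {padding}")

# Example from the stem in Figure 3: a 3x3 'V' convolution with stride 2
# applied to a 299x299 input yields a 149x149 grid.
assert conv_output_size(299, 3, stride=2, padding="valid") == 149
assert conv_output_size(299, 3, stride=2, padding="same") == 150
```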
# 3.2. Residual Inception Blocks
For the residual versions of the Inception networks, we use cheaper Inception blocks than the original Inception. Each Inception block is followed by a filter-expansion layer (1 × 1 convolution without activation) which is used for scaling up the dimensionality of the filter bank before the addition to match the depth of the input. This is needed to compensate for the dimensionality reduction induced by the Inception block.
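As an illustration of this pattern (not the exact branch widths of any block in the paper), the sketch below builds a cheap Inception block, applies a linear 1 × 1 filter-expansion convolution to restore the input depth, and then performs the residual addition; tf.keras is used here purely for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_inception_block(x):
    """Illustrative residual Inception block with a 1x1 filter-expansion layer.
    Branch widths are placeholders, not the exact Inception-ResNet values."""
    in_channels = x.shape[-1]

    # Cheap Inception branches (same-padded, so the grids stay aligned).
    branch_0 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    branch_1 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    branch_1 = layers.Conv2D(32, 3, padding="same", activation="relu")(branch_1)
    mixed = layers.Concatenate()([branch_0, branch_1])

    # Filter-expansion: a 1x1 convolution without activation ("Linear" in the
    # figures) scales the concatenated filter bank back up to the input depth
    # so that it can be added to the shortcut.
    up = layers.Conv2D(in_channels, 1, padding="same", activation=None)(mixed)

    out = layers.Add()([x, up])
    return layers.Activation("relu")(out)

# Usage: inputs = layers.Input((35, 35, 256)); y = residual_inception_block(inputs)
```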
We tried several versions of the residual version of Inception. Only two of them are detailed here. The first one, "Inception-ResNet-v1", roughly matches the computational cost of Inception-v3, while "Inception-ResNet-v2" matches the raw cost of the newly introduced Inception-v4 network. See Figure 15 for the large-scale structure of both variants. (However, the step time of Inception-v4 proved to be significantly slower in practice, probably due to the larger number of layers.)
Another small technical difference between our residual and non-residual Inception variants is that in the case of Inception-ResNet, we used batch-normalization only on top of the traditional layers, but not on top of the summations. It is reasonable to expect that a thorough use of batch-normalization should be advantageous, but we wanted to keep each model replica trainable on a single GPU. It turned out that the memory footprint of layers with a large activation size was consuming a disproportionate amount of GPU memory. By omitting the batch-normalization on top of those layers, we were able to increase the overall number of Inception blocks substantially. We hope that with better utilization of computing resources, making this trade-off will become unnecessary.
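A sketch of this placement (helper names are ours; tf.keras used for illustration): batch-normalization sits inside the traditional convolutional layers, while the residual summation is left un-normalized:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size):
    """A 'traditional' layer: convolution followed by batch-normalization and ReLU."""
    x = layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def residual_merge(shortcut, branch):
    """The residual summation: no batch-normalization on top of the addition,
    because these merged activations are among the largest tensors in the network."""
    out = layers.Add()([shortcut, branch])
    return layers.Activation("relu")(out)
```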
Figure 3. The schema for the stem of the pure Inception-v4 and Inception-ResNet-v2 networks. This is the input part of those networks. Cf. Figures 9 and 15.
Figure 4. The schema for 35 × 35 grid modules of the pure Inception-v4 network. This is the Inception-A block of Figure 9.
Figure 5. The schema for 17 × 17 grid modules of the pure Inception-v4 network. This is the Inception-B block of Figure 9.
Figure 6. The schema for 8 × 8 grid modules of the pure Inception-v4 network. This is the Inception-C block of Figure 9.
Figure 7. The schema for the 35 × 35 to 17 × 17 reduction module. Different variants of this block (with various numbers of filters) are used in Figures 9 and 15, in each of the new Inception(-v4, -ResNet-v1, -ResNet-v2) variants presented in this paper. The k, l, m, n numbers represent filter bank sizes which can be looked up in Table 1.
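For concreteness, here is a sketch of the Reduction-A pattern of Figure 7 parametrized by k, l, m, n (tf.keras used for illustration; batch-normalization is omitted for brevity):

```python
import tensorflow as tf
from tensorflow.keras import layers

def reduction_a(x, k, l, m, n):
    """Illustrative Reduction-A module (Figure 7): three parallel branches that
    reduce the grid from 35x35 to 17x17, followed by filter concatenation."""
    # Branch 1: 3x3 max-pooling, stride 2, valid padding.
    branch_pool = layers.MaxPooling2D(3, strides=2, padding="valid")(x)
    # Branch 2: a single 3x3 convolution with n filters, stride 2, valid padding.
    branch_3x3 = layers.Conv2D(n, 3, strides=2, padding="valid", activation="relu")(x)
    # Branch 3: 1x1 (k filters) -> 3x3 (l filters) -> 3x3 (m filters, stride 2, valid).
    branch_stack = layers.Conv2D(k, 1, padding="same", activation="relu")(x)
    branch_stack = layers.Conv2D(l, 3, padding="same", activation="relu")(branch_stack)
    branch_stack = layers.Conv2D(m, 3, strides=2, padding="valid",
                                 activation="relu")(branch_stack)
    return layers.Concatenate()([branch_pool, branch_3x3, branch_stack])

# Inception-v4 uses k, l, m, n = 192, 224, 256, 384 (Table 1):
# inputs = layers.Input((35, 35, 384)); y = reduction_a(inputs, 192, 224, 256, 384)
```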
Figure 8. The schema for the 17 × 17 to 8 × 8 grid-reduction module. This is the reduction module used by the pure Inception-v4 network in Figure 9.
Figure 9 layout (bottom to top): Input (299×299×3) → Stem (output: 35×35×384) → 4 × Inception-A (35×35×384) → Reduction-A (17×17×1024) → 7 × Inception-B (17×17×1024) → Reduction-B (8×8×1536) → 3 × Inception-C (8×8×1536) → Average Pooling (1536) → Dropout (keep 0.8) → Softmax (output: 1000).
Figure 9. The overall schema of the Inception-v4 network. Please refer to Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of the various components.
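The macro-structure of Figure 9 can be summarized by the following sketch (tf.keras used for illustration; the block-building functions are assumed to implement Figures 3 through 8):

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_v4(stem, inception_a, reduction_a, inception_b, reduction_b,
                 inception_c, num_classes=1000):
    """Illustrative assembly of the Inception-v4 network of Figure 9.
    Shapes in the comments are the activation sizes annotated in the figure."""
    inputs = layers.Input((299, 299, 3))
    x = stem(inputs)                          # 35 x 35 x 384
    for _ in range(4):
        x = inception_a(x)                    # 35 x 35 x 384
    x = reduction_a(x)                        # 17 x 17 x 1024
    for _ in range(7):
        x = inception_b(x)                    # 17 x 17 x 1024
    x = reduction_b(x)                        # 8 x 8 x 1536
    for _ in range(3):
        x = inception_c(x)                    # 8 x 8 x 1536
    x = layers.GlobalAveragePooling2D()(x)    # 1536
    x = layers.Dropout(0.2)(x)                # "keep 0.8" corresponds to rate = 0.2
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```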
Figure 10. The schema for the 35 × 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v1 network.
Figure 11. The schema for the 17 × 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v1 network.
Figure 12. "Reduction-B" 17 × 17 to 8 × 8 grid-reduction module. This module is used by the smaller Inception-ResNet-v1 network in Figure 15.
Figure 13. The schema for the 8 × 8 grid (Inception-ResNet-C) module of the Inception-ResNet-v1 network.
Figure 14. The stem of the Inception-ResNet-v1 network.
Figure 15 layout (bottom to top): Input (299×299×3) → Stem (output: 35×35×256) → 5 × Inception-ResNet-A (35×35×256) → Reduction-A (17×17×896) → 10 × Inception-ResNet-B (17×17×896) → Reduction-B (8×8×1792) → 5 × Inception-ResNet-C (8×8×1792) → Average Pooling (1792) → Dropout (keep 0.8) → Softmax (output: 1000).
Figure 15. Schema for the Inception-ResNet-v1 and Inception-ResNet-v2 networks. This schema applies to both networks but the underlying components differ. Inception-ResNet-v1 uses the blocks as described in Figures 14, 10, 7, 11, 12 and 13. Inception-ResNet-v2 uses the blocks as described in Figures 3, 16, 7, 17, 18 and 19. The output sizes in the diagram refer to the activation tensor shapes of Inception-ResNet-v1.
Figure 16. The schema for the 35 × 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v2 network.
Figure 17. The schema for the 17 × 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v2 network.
Figure 18. The schema for the 17 × 17 to 8 × 8 grid-reduction module. This is the Reduction-B module used by the wider Inception-ResNet-v2 network in Figure 15.
Figure 19. The schema for the 8 × 8 grid (Inception-ResNet-C) module of the Inception-ResNet-v2 network.
| Network | k | l | m | n |
|---|---|---|---|---|
| Inception-v4 | 192 | 224 | 256 | 384 |
| Inception-ResNet-v1 | 192 | 192 | 256 | 384 |
| Inception-ResNet-v2 | 256 | 256 | 384 | 384 |
Table 1. The number of filters of the Reduction-A module for the three Inception variants presented in this paper. The values of k, l, m and n parametrize the four convolutions of Figure 7.
Figure 20. The general schema for scaling combined Inception-ResNet modules. We expect that the same idea is useful in the general ResNet case, where instead of the Inception block an arbitrary subnetwork is used. The scaling block just scales the last linear activations by a suitable constant, typically around 0.1.
# 3.3. Scaling of the Residuals
We also found that if the number of filters exceeded 1000, the residual variants started to exhibit instabilities and the network just "died" early in the training, meaning that the last layer before the average pooling started to produce only zeros after a few tens of thousands of iterations. This could not be prevented, either by lowering the learning rate or by adding an extra batch-normalization to this layer.
We found that scaling down the residuals before adding them to the previous layer activation seemed to stabilize the training. In general we picked scaling factors between 0.1 and 0.3 to scale the residuals before they are added to the accumulated layer activations (cf. Figure 20).
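A minimal sketch of this scaling (cf. Figure 20), with tf.keras used for illustration and the scaling constant left as a parameter:

```python
import tensorflow as tf
from tensorflow.keras import layers

def scaled_residual(shortcut, residual, scale=0.1):
    """Scale the residual branch by a constant (typically 0.1-0.3) before
    adding it back to the shortcut activations, as in Figure 20."""
    scaled = layers.Lambda(lambda t: t * scale)(residual)
    out = layers.Add()([shortcut, scaled])
    return layers.Activation("relu")(out)
```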
A similar instability was observed by He et al. in [5] in the case of very deep residual networks, and they suggested a two-phase training where the first "warm-up" phase is done with a very low learning rate, followed by a second phase with a high learning rate. We found that if the number of filters is very high, then even a very low (0.00001) learning rate is not sufficient to cope with the instabilities, and training with a high learning rate had a chance to destroy its effects. We found it much more reliable to just scale the residuals.
Even where the scaling was not strictly necessary, it never seemed to harm the final accuracy, but it helped to stabilize the training.
# 4. Training Methodology
We have trained our networks with stochastic gradient descent, utilizing the TensorFlow [1] distributed machine learning system with 20 replicas, each running on an NVidia Kepler GPU. Our earlier experiments used momentum [13] with a decay of 0.9, while our best models were achieved using RMSProp with a decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. Model evaluations are performed using a running average of the parameters computed over time.
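The optimizer settings above can be written down as a short sketch (tf.keras API used for illustration; steps_per_epoch is a placeholder that depends on the batch size and the number of replicas):

```python
import tensorflow as tf

steps_per_epoch = 10000  # placeholder: depends on batch size and replica count

# Learning rate 0.045, decayed every two epochs with an exponential rate of 0.94.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.045,
    decay_steps=2 * steps_per_epoch,
    decay_rate=0.94,
    staircase=True)

# RMSProp with decay (rho) 0.9 and epsilon 1.0.
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=lr_schedule, rho=0.9, epsilon=1.0)

# Evaluations use a running (exponential moving) average of the model parameters.
```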
Figure 21. Top-1 error evolution during training of pure Inception-v3 vs a residual network of similar computational cost. The evaluation is measured on a single crop on the non-blacklisted images of the ILSVRC-2012 validation set. The residual model was training much faster, but reached slightly worse final accuracy than the traditional Inception-v3.
# 5. Experimental Results
First we observe the top-1 and top-5 validation-error evolution of the four variants during training. After the experiment was conducted, we found that our continuous evaluation had been performed on a subset of the validation set which omitted about 1700 blacklisted entities due to poor bounding boxes. It turned out that the omission should have been performed only for the CLSLOC benchmark, but it yields somewhat incomparable (more optimistic) numbers when compared to other reports, including some earlier reports by our team. The difference is about 0.3% for top-1 error and about 0.15% for top-5 error. However, since the differences are consistent, we think the comparison between the curves is a fair one.
On the other hand, we have rerun our multi-crop and ensemble results on the complete validation set consisting of 50000 images. The final ensemble result was also evaluated on the test set and sent to the ILSVRC test server for validation to verify that our tuning did not result in over-fitting. We would like to stress that this final validation was done only once, and we have submitted our results only twice in the last year: once for the BN-Inception paper and later during the ILSVRC-2015 CLSLOC competition, so we believe that the test-set numbers constitute a true estimate of the generalization capabilities of our model.
Finally, we present some comparisons between various versions of Inception and Inception-ResNet.
Figure 22. Top-5 error evolution during training of pure Inception-v3 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklisted images of the ILSVRC-2012 validation set. The residual version trained much faster and reached slightly better final recall on the validation set.
Figure 23. Top-1 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklisted images of the ILSVRC-2012 validation set. The residual version was training much faster and reached slightly better final accuracy than the traditional Inception-v4.
| Network | Top-1 Error | Top-5 Error |
|---|---|---|
| BN-Inception [6] | 25.2% | 7.8% |
| Inception-v3 [15] | 21.2% | 5.6% |
| Inception-ResNet-v1 | 21.3% | 5.5% |
| Inception-v4 | 20.0% | 5.0% |
| Inception-ResNet-v2 | 19.9% | 4.9% |
Table 2. Single-crop, single-model experimental results. Reported on the non-blacklisted subset of the validation set of ILSVRC 2012.
The models Inception-v3 and Inception-v4 are deep convolutional networks not utilizing residual connections, while Inception-ResNet-v1 and Inception-ResNet-v2 are Inception-style networks that utilize residual connections instead of filter concatenation.
Table 2 shows the single-model, single crop top-1 and top-5 error of the various architectures on the validation set.
Figure 24. Top-5 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklisted images of the ILSVRC-2012 validation set. The residual version trained faster and reached slightly better final recall on the validation set.
Figure 25. Top-5 error evolution of all four models (single model, single crop), showing the improvement due to larger model size. Although the residual versions converge faster, the final accuracy seems to mainly depend on the model size.
Figure 26. Top-1 error evolution of all four models (single model, single crop). This paints a similar picture as the top-5 evaluation.
Table 3 shows the performance of the various models with a small number of crops: 10 crops for ResNet, as reported in [5]; for the Inception variants, we have used the 12-crop evaluation as described in [14].
| Network | Crops | Top-1 Error | Top-5 Error |
|---|---|---|---|
| ResNet-151 [5] | 10 | 21.4% | 5.7% |
| Inception-v3 [15] | 12 | 19.8% | 4.6% |
| Inception-ResNet-v1 | 12 | 19.8% | 4.6% |
| Inception-v4 | 12 | 18.7% | 4.2% |
| Inception-ResNet-v2 | 12 | 18.7% | 4.1% |
Table 3. 10/12-crop evaluations, single-model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
| Network | Crops | Top-1 Error | Top-5 Error |
|---|---|---|---|
| ResNet-151 [5] | dense | 19.4% | 4.5% |
| Inception-v3 [15] | 144 | 18.9% | 4.3% |
| Inception-ResNet-v1 | 144 | 18.8% | 4.3% |
| Inception-v4 | 144 | 17.7% | 3.8% |
| Inception-ResNet-v2 | 144 | 17.8% | 3.7% |
Table 4. 144-crop evaluations, single-model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
| Network | Models | Top-1 Error | Top-5 Error |
|---|---|---|---|
| ResNet-151 [5] | 6 | – | 3.6% |
| Inception-v3 [15] | 4 | 17.3% | 3.6% |
| Inception-v4 + 3× Inception-ResNet-v2 | 4 | 16.5% | 3.1% |
Table 5. Ensemble results with 144-crop/dense evaluation. Reported on all 50000 images of the validation set of ILSVRC 2012. For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 and three Inception-ResNet-v2 models and was evaluated both on the validation and on the test set. The test-set performance was 3.08% top-5 error, verifying that we don't over-fit on the validation set.
Table 4 shows the single-model performance of the various models using multi-crop (144-crop or dense) evaluation. For the residual network, the dense evaluation result is reported from [5]. For the Inception networks, the 144-crop strategy was used as described in [14].
Table 5 compares ensemble results. For the pure residual network, the six-model dense evaluation result is reported from [5]. For the Inception networks, four models were ensembled using the 144-crop strategy as described in [14].
# 6. Conclusions
We have presented three new network architectures in detail:
• Inception-ResNet-v1: a hybrid Inception version that has a similar computational cost to Inception-v3 from [15].
• Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance.
• Inception-v4: a pure Inception variant without residual connections, with roughly the same recognition performance as Inception-ResNet-v2.
We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Our latest models (with and without residual connections) also outperform all our previous networks, just by virtue of the increased model size.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. | 1602.07261#32 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 33 | [2] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012.
[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014, pages 184–199. Springer, 2014. [4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015. | 1602.07261#33 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 34 | [7] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. IEEE, 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[9] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[10] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[11] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,
et al. 2014. Imagenet large scale visual recognition challenge. | 1602.07261#34 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 35 | et al. 2014. Imagenet large scale visual recognition challenge.
[12] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[13] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Ma- chine Learning (ICML-13), volume 28, pages 1139â1147. JMLR Workshop and Conference Proceedings, May 2013.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. | 1602.07261#35 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 36 | [16] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[17] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653–1660. IEEE, 2014.
[18] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809–817, 2013. | 1602.07261#36 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.02867 | 0 | arXiv:1602.02867v4 [cs.AI] 20 Mar 2017
# Value Iteration Networks
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel
Dept. of Electrical Engineering and Computer Sciences, UC Berkeley
# Abstract
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
# 1 Introduction | 1602.02867#0 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 1 | # 1 Introduction
Over the last decade, deep convolutional neural networks (CNNs) have revolutionized supervised learning for tasks such as object recognition, action recognition, and semantic segmentation [3, 15, 6, 19]. Recently, CNNs have been applied to reinforcement learning (RL) tasks with visual observations such as Atari games [21], robotic manipulation [18], and imitation learning (IL) [9]. In these tasks, a neural network (NN) is trained to represent a policy – a mapping from an observation of the system's state to an action, with the goal of representing a control strategy that has good long-term behavior, typically quantified as the minimization of a sequence of time-dependent costs.
The sequential nature of decision making in RL is inherently different from the one-step decisions in supervised learning, and in general requires some form of planning [2]. However, most recent deep RL works [21, 18, 9] employed NN architectures that are very similar to the standard networks used in supervised learning tasks, which typically consist of CNNs for feature extraction, and fully connected layers that map the features to a probability distribution over actions. Such networks are inherently reactive, and in particular, lack explicit planning computation. The success of reactive policies in sequential problems is due to the learning algorithm, which essentially trains a reactive policy to select actions that have good long-term consequences in its training domain. | 1602.02867#1 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 2 | To understand why planning can nevertheless be an important ingredient in a policy, consider the grid-world navigation task depicted in Figure 1 (left), in which the agent can observe a map of its domain, and is required to navigate between some obstacles to a target position. One hopes that after training a policy to solve several instances of this problem with different obstacle conï¬gurations, the policy would generalize to solve a different, unseen domain, as in Figure 1 (right). However, as we show in our experiments, while standard CNN-based networks can be easily trained to solve a set of such maps, they do not generalize well to new tasks outside this set, because they do not understand the goal-directed nature of the behavior. This observation suggests that the computation learned by reactive policies is different from planning, which is required to solve a new task1.
1In principle, with enough training data that covers all possible task configurations, and a rich enough policy representation, a reactive policy can learn to map each task to its optimal policy. In practice, this is often too expensive, and we offer a more data-efficient approach by exploiting a flexible prior about the planning computation underlying the behavior.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. | 1602.02867#2 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 3 | 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this work, we propose a NN-based policy that can effectively learn to plan. Our model, termed a value-iteration network (VIN), has a differentiable 'planning program' embedded within the NN structure.
Figure 1: Two instances of a grid-world domain. Task is to move to the goal between the obstacles. The key to our approach is an observation that the classic value-iteration (VI) planning algorithm [1, 2] may be represented by a specific type of CNN. By embedding such a VI network module inside a standard feed-forward classification network, we obtain a NN model that can learn the parameters of a planning computation that yields useful predictions. The VI block is differentiable, and the whole network can be trained using standard backpropagation. This makes our policy simple to train using standard RL and IL algorithms, and straightforward to integrate with NNs for perception and control. | 1602.02867#3 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 4 | Connections between planning algorithms and recurrent NNs were previously explored by Ilin et al. [12]. Our work builds on related ideas, but results in a more broadly applicable policy representation. Our approach is different from model-based RL [25, 4], which requires system identification to map the observations to a dynamics model, which is then solved for a policy. In many applications, including robotic manipulation and locomotion, accurate system identification is difficult, and modelling errors can severely degrade the policy performance. In such domains, a model-free approach is often preferred [18]. Since a VIN is just a NN policy, it can be trained model free, without requiring explicit system identification. In addition, the effects of modelling errors in VINs can be mitigated by training the network end-to-end, similarly to the methods in [13, 11].
We demonstrate the effectiveness of VINs within standard RL and IL algorithms in various problems, some of which require visual perception, continuous control, and also natural language based decision making in the WebNav challenge [23]. After training, the policy learns to map an observation to a planning computation relevant for the task, and generate action predictions based on the resulting plan. As we demonstrate, this leads to policies that generalize better to new, unseen, task instances. 2 Background | 1602.02867#4 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 6 | Value Iteration: A standard model for sequential decision making and planning is the Markov decision process (MDP) [1, 2]. An MDP $M$ consists of states $s \in S$, actions $a \in A$, a reward function $R(s, a)$, and a transition kernel $P(s'|s, a)$ that encodes the probability of the next state given the current state and action. A policy $\pi(a|s)$ prescribes an action distribution for each state. The goal in an MDP is to find a policy that obtains high rewards in the long term. Formally, the value $V^\pi(s)$ of a state under policy $\pi$ is the expected discounted sum of rewards when starting from that state and executing policy $\pi$, $V^\pi(s) = \mathbb{E}^\pi\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s\right]$, where $\gamma \in (0, 1)$ is a discount factor, and $\mathbb{E}^\pi$ denotes an expectation over trajectories of states and actions $(s_0, a_0, s_1, a_1, \ldots)$, in which actions are selected according to $\pi$, and states evolve according to the transition kernel $P(s'|s, a)$. The optimal value function $V^*(s) = \max_\pi V^\pi(s)$ is the maximal long-term return possible from a state. | 1602.02867#6 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 9 | $V_{n+1}(s) = \max_a Q_n(s, a)\ \forall s$, where $Q_n(s, a) = R(s, a) + \gamma \sum_{s'} P(s'|s, a) V_n(s')$. (1) It is well known that the value function $V_n$ in VI converges as $n \to \infty$ to $V^*$, from which an optimal policy may be derived as $\pi^*(s) = \arg\max_a Q_\infty(s, a)$. Convolutional Neural Networks (CNNs) are NNs with a particular architecture that has proved useful for computer vision, among other domains [15]. A CNN is comprised of stacked convolution and max-pooling layers. The input to each convolution layer is a 3-dimensional signal $X$, typically, an image with $l$ channels, $m$ horizontal pixels, and $n$ vertical pixels, and its output $h$ is an $l'$-channel convolution of the image with kernels $W^1, \ldots, W^{l'}$, $h_{l',i',j'} = \sigma\left(\sum_{l,i,j} W^{l'}_{l,i,j} X_{l,i'-i,j'-j}\right)$, where $\sigma$ is some scalar activation function. A max-pooling layer selects, for each channel $l$ and pixel $i, j$ in $h$, the maximum value among its neighbors $N(i, j)$, $h^{\text{maxpool}}_{l,i,j} =$ | 1602.02867#9 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
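As a concrete illustration of the VI recursion in Eq. (1), the following NumPy sketch runs tabular value iteration on a small, randomly generated MDP. The reward and transition arrays are placeholders, not any MDP from the paper.

```python
import numpy as np

def value_iteration(R, P, gamma=0.99, n_iters=100):
    """Tabular VI, Eq. (1): Q_n(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V_n(s'),
    then V_{n+1}(s) = max_a Q_n(s,a).

    R: (S, A) reward array.   P: (A, S, S) array with P[a, s, s'] = P(s'|s, a).
    Returns the value function and the greedy policy.
    """
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        # EV[s, a] = sum_t P[a, s, t] * V[t]
        EV = np.einsum("ast,t->sa", P, V)
        Q = R + gamma * EV
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

# Tiny random MDP just to exercise the function.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
R = rng.normal(size=(n_states, n_actions))
P = rng.random(size=(n_actions, n_states, n_states))
P /= P.sum(axis=-1, keepdims=True)   # normalize rows into probability distributions
V, pi = value_iteration(R, P)
print(V, pi)
```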
1602.02867 | 12 | patch around pixel i, 7. After max-pooling, the image is down-sampled by a constant factor d, com- monly 2 or 4, resulting in an output signal with â channels, m/d horizontal pixels, and n/d vertical pixels. CNNs are typically trained using stochastic gradient descent (SGD), with backpropagation for computing gradients. Reinforcement Learning and Imitation Learning: In MDPs where the state space is very large or continuous, or when the MDP transitions or rewards are not known in advance, planning algorithms cannot be applied. In these cases, a policy can be learned from either expert supervision â IL, or by trial and error â RL. While the learning algorithms in both cases are different, the policy representations â which are the focus of this work â are similar. Additionally, most state-of-the-art algorithms such as [2: are agnostic to the policy representation, and only require it to be differentiable, for performing gradient descent on some algorithm-specific loss function. Therefore, in this paper we do not commit to a specific learning algorithm, and only consider the policy. Let ¢(s) denote an observation for state s. The policy is specified as a parametrized function | 1602.02867#12 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 13 | commit to a specific learning algorithm, and only consider the policy. Let ¢(s) denote an observation for state s. The policy is specified as a parametrized function tt9(a|(s)) mapping observations to a probability over actions, where @ are the policy parameters. For example, the policy could be represented as a neural network, with @ denoting the network weights. The goal is to tune the parameters such that the policy behaves well in the sense that to(a|@(s)) & x*(alb(s)), where 7* is the optimal policy for the MDP, as defined in Section[2] In IL, a dataset of N_ state observations and corresponding optimal actions {9(s'), a" ~ x*(0(s'))} 4 yn is generated by an expert. Learning a policy then becomes an instance of supervised learning [24] /9]. In RL, the optimal action is not available, but instead, the agent can act in the world and observe the rewards and state transitions its actions effect. RL algorithms such as in use these observations to improve the value of the policy. 3 The Value Iteration Network Model In this section we introduce a general policy representation that embeds an explicit planning module. As stated | 1602.02867#13 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
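To make the IL setup described above concrete, the sketch below performs one supervised gradient step for a simple linear softmax policy on a batch of expert state-action pairs. The feature dimension, action count, and data are illustrative placeholders, not the paper's experimental setup.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def il_gradient_step(W, obs, expert_actions, lr=0.1):
    """One supervised step for pi_theta(a | phi(s)) = softmax(phi(s) @ W).

    obs:            (N, d) array of observation features phi(s^i).
    expert_actions: (N,)   array of expert action indices a^i.
    """
    probs = softmax(obs @ W)                       # (N, num_actions)
    n = obs.shape[0]
    # Gradient of the negative log-likelihood of the expert actions.
    probs[np.arange(n), expert_actions] -= 1.0
    grad = obs.T @ probs / n
    return W - lr * grad

# Placeholder data: 100 observations with 8 features, 4 discrete actions.
rng = np.random.default_rng(1)
W = np.zeros((8, 4))
obs = rng.normal(size=(100, 8))
expert_actions = rng.integers(0, 4, size=100)
W = il_gradient_step(W, obs, expert_actions)
```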
1602.02867 | 15 | Let $M$ denote the MDP of the domain for which we design our policy $\pi$. We assume that there is some unknown MDP $\bar{M}$ such that the optimal plan in $\bar{M}$ contains useful information about the optimal policy in the original task $M$. However, we emphasize that we do not assume to know $\bar{M}$ in advance. Our idea is to equip the policy with the ability to learn and solve $\bar{M}$, and to add the solution of $\bar{M}$ as an element in the policy $\pi$. We hypothesize that this will lead to a policy that automatically learns a useful $\bar{M}$ to plan on. We denote by $\bar{s} \in \bar{S}$, $\bar{a} \in \bar{A}$, $\bar{R}(\bar{s}, \bar{a})$, and $\bar{P}(\bar{s}'|\bar{s}, \bar{a})$ the states, actions, rewards, and transitions in $\bar{M}$. To facilitate a connection between $M$ and $\bar{M}$, we let $\bar{R}$ and $\bar{P}$ depend on the observation in $M$, namely, $\bar{R} = f_R(\phi(s))$ and $\bar{P} = f_P(\phi(s))$, and we will later learn the functions $f_R$ and $f_P$ as a part of the policy learning process. For example, in the grid-world domain described above, we can let $\bar{M}$ have the same state and action spaces as the true grid-world $M$. The reward function $f_R$ can map an image of the domain to a high reward at the goal, and negative reward near an
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 16 | and action spaces as the true grid-world MM. The reward function fz can map an image of the domain to a high reward at the goal, and negative reward near an obstacle, while fp can encode deterministic movements in the grid-world that do not depend on the observation. While these rewards and transitions are not necessarily the true rewards and transitions in the task, an optimal plan in M will still follow a trajectory that avoids obstacles and reaches the goal, similarly to the optimal plan in /. Once an MDP I has been specified, any standard planning algorithm can be used to obtain the value function V*. In the next section, we shall show that using a particular implementation of VI for planning has the advantage of being differentiable, and simple to implement within a NN framework. In this section however, we focus on how to use the planning result V* within the NN policy 7. Our approach is based on two important observations. The first is that the vector of values V*(s) Vs encodes all the information about the optimal plan in /. Thus, adding the vector V* as additional features to the policy 7 is sufficient for extracting information about the optimal plan in /. However, an additional property of V* is that the optimal decision | 1602.02867#16 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 17 | $\bar{V}^*$ as additional features to the policy $\pi$ is sufficient for extracting information about the optimal plan in $\bar{M}$. However, an additional property of $\bar{V}^*$ is that the optimal decision $\bar{\pi}^*(\bar{s})$ at a state $\bar{s}$ can depend only on a subset of the values of $\bar{V}^*$, since $\bar{\pi}^*(\bar{s}) = \arg\max_{\bar{a}} \bar{R}(\bar{s}, \bar{a}) + \gamma \sum_{\bar{s}'} \bar{P}(\bar{s}'|\bar{s}, \bar{a}) \bar{V}^*(\bar{s}')$. Therefore, if the MDP has a local connectivity structure, such as in the grid-world example above, the states for which $\bar{P}(\bar{s}'|\bar{s}, \bar{a}) > 0$ is a small subset of $\bar{S}$. In NN terminology, this is a form of attention [32], in the sense that for a given label prediction (action), only a subset of the input features (value function) is relevant. Attention is known to improve learning performance by reducing the effective number of network parameters during learning. Therefore, the second element in our network is an attention module that outputs a vector of (attention | 1602.02867#17 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 18 | 3
modulated) values $\psi(s)$. Finally, the vector $\psi(s)$ is added as additional features to a reactive policy $\pi_{re}(a|\phi(s), \psi(s))$. The full network architecture is depicted in Figure 2 (left). Returning to our grid-world example, at a particular state $s$, the reactive policy only needs to query the values of the states neighboring $s$ in order to select the correct action. Thus, the attention module in this case could return a $\psi(s)$ vector with a subset of $\bar{V}^*$ for these neighboring states.
Figure 2: Planning-based NN models. Left: a general policy representation that adds value function features from a planner to a reactive policy. Right: VI module – a CNN representation of the VI algorithm. | 1602.02867#18 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 19 | Let $\theta$ denote all the parameters of the policy, namely, the parameters of $f_R$, $f_P$, and $\pi_{re}$, and note that $\psi(s)$ is in fact a function of $\phi(s)$. Therefore, the policy can be written in the form $\pi_\theta(a|\phi(s))$, similarly to the standard policy form (cf. Section 2). If we could back-propagate through this function, then potentially we could train the policy using standard RL and IL algorithms, just like any other standard policy representation. While it is easy to design functions $f_R$ and $f_P$ that are differentiable (and we provide several examples in our experiments), back-propagating the gradient through the planning algorithm is not trivial. In the following, we propose a novel interpretation of an approximate VI algorithm as a particular form of a CNN. This allows us to conveniently treat the planning module as just another NN, and by back-propagating through it, we can train the whole policy end-to-end.
# 3.1 The VI Module | 1602.02867#19 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 20 | # 3.1 The VI Module
We now introduce the VI module – a NN that encodes a differentiable planning computation. Our starting point is the VI algorithm (1). Our main observation is that each iteration of VI may be seen as passing the previous value function $V_n$ and reward function $R$ through a convolution layer and max-pooling layer. In this analogy, each channel in the convolution layer corresponds to the Q-function for a specific action, and convolution kernel weights correspond to the discounted transition probabilities. Thus by recurrently applying a convolution layer K times, K iterations of VI are effectively performed. | 1602.02867#20 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 21 | Following this idea, we propose the VI network module, as depicted in Figure 2B. The input to the VI module is a 'reward image' $\bar{R}$ of dimensions $l, m, n$, where here, for the purpose of clarity, we follow the CNN formulation and explicitly assume that the state space $\bar{S}$ maps to a 2-dimensional grid. However, our approach can be extended to general discrete state spaces, for example, a graph, as we report in the WikiNav experiment in Section 4. The reward is fed into a convolutional layer $\bar{Q}$ with $\bar{A}$ channels and a linear activation function, $\bar{Q}_{\bar{a},i',j'} = \sum_{l,i,j} W^{\bar{a}}_{l,i,j} \bar{R}_{l,i'-i,j'-j}$. Each channel in this layer corresponds to $\bar{Q}(\bar{s}, \bar{a})$ for a particular action $\bar{a}$. This layer is then max-pooled along the actions channel to produce the next-iteration value function layer $\bar{V}$, $\bar{V}_{i',j'} = \max_{\bar{a}} \bar{Q}(\bar{a}, i', j')$. The next-iteration value function layer $\bar{V}$ is then stacked with the reward $\bar{R}$, and fed back into the convolutional layer and max-pooling layer K times, to perform K iterations of value iteration. | 1602.02867#21 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
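To make the VI module construction above concrete, here is a minimal NumPy sketch: the reward image and the current value image are each convolved with a set of 3 x 3 kernels (one output channel per abstract action, playing the role of the discounted transition probabilities), and the resulting Q volume is max-pooled over the action channel; repeating this K times performs K approximate VI iterations. This is an illustrative re-implementation with random placeholder kernels, not the authors' released Theano code.

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2-D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def vi_module(R_bar, W_r, W_v, K=20):
    """K iterations of the VI module on an (m, n) grid.

    R_bar: (m, n) reward image produced by f_R.
    W_r, W_v: (A, 3, 3) kernels applied to the reward and value images.
    Returns the final value image and the last Q volume of shape (A, m, n).
    """
    A = W_r.shape[0]
    V = np.zeros_like(R_bar)
    for _ in range(K):
        # One Q channel per abstract action, then max-pool over the action channel.
        Q = np.stack([conv2d_same(R_bar, W_r[a]) + conv2d_same(V, W_v[a])
                      for a in range(A)])
        V = Q.max(axis=0)
    return V, Q

# Random placeholder reward image and kernels, just to run the module.
rng = np.random.default_rng(0)
R_bar = rng.normal(size=(8, 8))
W_r = rng.normal(size=(4, 3, 3)) * 0.1
W_v = rng.normal(size=(4, 3, 3)) * 0.1
V_bar, Q_bar = vi_module(R_bar, W_r, W_v, K=10)
```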
1602.02867 | 22 | The VI module is simply a NN architecture that has the capability of performing an approximate VI computation. Nevertheless, representing VI in this form makes learning the MDP parameters and reward function natural â by backpropagating through the network, similarly to a standard CNN. VI modules can also be composed hierarchically, by treating the value of one VI module as additional input to another VI module. We further report on this idea in the supplementary material.
# 3.2 Value Iteration Networks
We now have all the ingredients for a differentiable planning-based policy, which we term a value iteration network (VIN). The VIN is based on the general planning-based policy defined above, with the VI module as the planning algorithm. In order to implement a VIN, one has to specify the state
4 | 1602.02867#22 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 23 | 4
and action spaces for the planning module ¯S and ¯A, the reward and transition functions fR and fP, and the attention function; we refer to this as the VIN design. For some tasks, as we show in our experiments, it is relatively straightforward to select a suitable design, while other tasks may require more thought. However, we emphasize an important point: the reward, transitions, and attention can be defined by parametric functions, and trained with the whole policy2. Thus, a rough design can be specified, and then fine-tuned by end-to-end training.
Once a VIN design is chosen, implementing the VIN is straightforward, as it is simply a form of a CNN. The networks in our experiments all required only several lines of Theano [28] code. In the next section, we evaluate VIN policies on various domains, showing that by learning to plan, they achieve a better generalization capability.
4 Experiments In this section we evaluate VINs as policy representations on various domains. Additional experiments investigating RL and hierarchical VINs, as well as technical implementation details are discussed in the supplementary material. Source code is available at https://github.com/avivt/VIN. Our goal in these experiments is to investigate the following questions:
1. Can VINs effectively learn a planning computation using standard RL and IL algorithms? | 1602.02867#23 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 24 | 1. Can VINs effectively learn a planning computation using standard RL and IL algorithms?
2. Does the planning computation learned by VINs make them better than reactive policies at generalizing to new domains?
An additional goal is to point out several ideas for designing VINs for various tasks. While this is not an exhaustive list that ï¬ts all domains, we hope that it will motivate creative designs in future work. | 1602.02867#24 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 25 | 4.1 Grid-World Domain Our ï¬rst experiment domain is a synthetic grid-world with randomly placed obstacles, in which the observation includes the position of the agent, and also an image of the map of obstacles and goal position. Figure 3 shows two random instances of such a grid-world of size 16 à 16. We conjecture that by learning the optimal policy for several instances of this domain, a VIN policy would learn the planning computation required to solve a new, unseen, task. In such a simple domain, an optimal policy can easily be calculated using exact VI. Note, however, that here we are interested in evaluating whether a NN policy, trained using RL or IL, can learn to plan. In the following results, policies were trained using IL, by standard supervised learning from demonstrations of the optimal policy. In the supplementary material, we report additional RL experiments that show similar ï¬ndings. We design a VIN for this task following the guidelines described above, where the planning MDP ¯M is a grid-world, similar to the true MDP. The reward mapping fR is a CNN mapping the image input to a reward map in the grid-world. Thus, fR should potentially learn to | 1602.02867#25 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
the true MDP. The reward mapping fR is a CNN mapping the image input to a reward map in the grid-world. Thus, fR should potentially learn to discriminate between obstacles, non-obstacles and the goal, and assign a suitable reward to each. The transitions ¯P were defined as 3 × 3 convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local. The recurrence K was chosen in proportion to the grid-world size, to ensure that information can flow from the goal state to any other state. For the attention module, we chose a trivial approach that selects the ¯Q values in the VI block for the current state, i.e., ψ(s) = ¯Q(s, ·). The final reactive policy is a fully connected network that maps ψ(s) to a probability over actions.
We compare VINs to the following NN reactive policies:
CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results
of DQN [21], with 5 convolution layers, and a fully connected output. While the network in [21] was trained to predict Q values, our network outputs a probability over actions. These terms are related, since π*(s) = arg max_a Q(s, a).
Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state.
In Table 1 we present the average 0-1 prediction loss of each model, evaluated on a held-out test-set of maps with random obstacles, goals, and initial states, for different problem sizes. In addition, for each map, a full trajectory from the initial state was predicted, by iteratively rolling-out the next-states
Figure 3: Grid-world domains (best viewed in color). A,B: Two random instances of the 28 × 28 synthetic gridworld, with the VIN-predicted trajectories and ground-truth shortest paths between random start and goal positions. C: An image of the Mars domain, with points of elevation sharper than 10° colored in red. These points were calculated from a matching image of elevation data (not shown), and were not available to the learning algorithm. Note the difficulty of distinguishing between obstacles and non-obstacles. D: The VIN-predicted (purple line with cross markers), and the shortest-path ground truth (blue line) trajectories between random start and goal positions.
| Domain | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | CNN Pred. loss | CNN Succ. rate | CNN Traj. diff. | FCN Pred. loss | FCN Succ. rate | FCN Traj. diff. |
|---|---|---|---|---|---|---|---|---|---|
| 8 × 8 | 0.004 | 99.6% | 0.001 | 0.02 | 97.9% | 0.006 | 0.01 | 97.3% | 0.004 |
| 16 × 16 | 0.05 | 99.3% | 0.089 | 0.10 | 87.6% | 0.06 | 0.07 | 88.3% | 0.05 |
| 28 × 28 | 0.11 | 97% | 0.086 | 0.13 | 74.2% | 0.078 | 0.09 | 76.6% | 0.08 |
Table 1: Performance on grid-world domain. Top: comparison with reactive policies. For all domain sizes, VIN networks significantly outperform standard reactive networks. Note that the performance gap increases dramatically with problem size.
predicted by the network. A trajectory was said to succeed if it reached the goal without hitting obstacles. For each trajectory that succeeded, we also measured its difference in length from the optimal trajectory. The average difference and the average success rate are reported in Table 1.
Clearly, VIN policies generalize to domains outside the training set. A visualization of the reward mapping fR (see supplementary material) shows that it is negative at obstacles, positive at the goal, and a small negative constant otherwise. The resulting value function has a gradient pointing towards a direction to the goal around obstacles, thus a useful planning computation was learned. VINs also significantly outperform the reactive networks, and the performance gap increases dramatically with the problem size. Importantly, note that the prediction loss for the reactive policies is comparable to the VINs, although their success rate is significantly worse. This shows that this is not a standard case of overfitting/underfitting
of the reactive policies. Rather, VIN policies, by their VI structure, focus prediction errors on less important parts of the trajectory, while reactive policies do not make this distinction, and learn the easily predictable parts of the trajectory yet fail on the complete task.
The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One may wonder whether any deep enough network would learn to plan. In principle, a CNN or FCN of depth K has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the K recurrent layers in the VIN. Our results, reported in the supplementary material, show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data.
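To make the grid-world design concrete, the following is a minimal NumPy sketch of the VI module and attention described in this section. It is an illustration only, not the authors' implementation: the array names (`reward_map`, `wr`, `wv`) and shapes are assumptions, and the kernels shown as random would in practice be learned end-to-end together with fR and the reactive policy.

```python
import numpy as np

def conv3x3_same(grid, kernels):
    """'Same' zero-padded 3x3 convolution of an (H, W) map with
    (n_actions, 3, 3) kernels; returns an (n_actions, H, W) stack."""
    H, W = grid.shape
    padded = np.pad(grid, 1)
    out = np.zeros((kernels.shape[0], H, W))
    for a, k in enumerate(kernels):
        for i in range(H):
            for j in range(W):
                out[a, i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

def vi_block(reward_map, wr, wv, K):
    """K unrolled Bellman backups: Q = conv(R) + conv(V), V = max over actions."""
    v = np.zeros_like(reward_map)
    q = None
    for _ in range(K):
        q = conv3x3_same(reward_map, wr) + conv3x3_same(v, wv)
        v = q.max(axis=0)
    return q, v

def attention(q, state):
    """psi(s) = Qbar(s, .): read out the Q values at the current cell."""
    i, j = state
    return q[:, i, j]

# Toy usage on an 8x8 map with 4 actions and K = 10 iterations.
rng = np.random.default_rng(0)
reward_map = rng.normal(size=(8, 8))   # stand-in for the output of f_R
wr = rng.normal(size=(4, 3, 3))        # reward kernels (learned in a real VIN)
wv = rng.normal(size=(4, 3, 3))        # transition kernels (learned in a real VIN)
q_bar, v_bar = vi_block(reward_map, wr, wv, K=10)
psi = attention(q_bar, state=(2, 3))   # fed to the final reactive policy
```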
4.2 Mars Rover Navigation
In this experiment we show that VINs can learn to plan from natural image input. We demonstrate this on path-planning from overhead terrain images of a Mars landscape. Each domain is represented by a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if the terrain in its corresponding 8 × 8 image patch contained an elevation angle of 10 degrees or more, evaluated using an external elevation database. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping fR for processing the image. The policy was trained to predict the shortest-path directly from the terrain image. We emphasize that the elevation data is not part of the input, and must be inferred (if needed) from the terrain image.
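For intuition only, the ground-truth obstacle labels used for evaluation could be derived from an elevation-angle map along the following lines. This is a sketch with assumed array names, not the paper's pipeline; the paper uses an external elevation database, and the VIN itself never observes these labels.

```python
import numpy as np

def obstacle_grid(elevation_deg, grid_size=16, patch=8, threshold=10.0):
    """Label a grid cell an obstacle if any pixel of its 8x8 terrain patch
    has an elevation angle of `threshold` degrees or more."""
    obstacles = np.zeros((grid_size, grid_size), dtype=bool)
    for i in range(grid_size):
        for j in range(grid_size):
            block = elevation_deg[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            obstacles[i, j] = block.max() >= threshold
    return obstacles

# 128 x 128 elevation angles (degrees) -> 16 x 16 obstacle map
elevation = np.abs(np.random.default_rng(1).normal(scale=6.0, size=(128, 128)))
labels = obstacle_grid(elevation)
```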
After training, VIN achieved a success rate of 84.8%. To put this rate in context, we compare with the best performance achievable without access to the elevation data, which is 90.3%. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN did not have access to these ground-truth obstacle labels during training or testing). The success rate of a planner that uses the obstacle map generated by this classifier from the raw image is 90.3%, showing that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN, which was trained without any obstacle labels and had to "figure out" the planning process, is quite remarkable.
4.3 Continuous Control
We now consider a 2D path planning domain with continuous states and continuous actions, which cannot be solved using VI, and therefore a VIN cannot be naively applied. Instead, we will construct the VIN to perform "high-level" planning on a discrete, coarse, grid-world representation of the continuous domain. We shall show that a VIN can learn to plan such a "high-level" plan, and also exploit that plan within its "low-level" continuous control policy. Moreover, the VIN policy results in better generalization than a reactive policy.
Consider the domain in Figure 4. A red-colored particle needs to be navigated to a green goal using horizontal and vertical forces. Gray-colored obstacles are randomly positioned in the domain, and apply an elastic force and friction when contacted. This domain presents a non-trivial control problem, as the agent needs to both plan a feasible trajectory between the obstacles (or use them to bounce off), but also control the particle (which has mass
and inertia) to follow it. The state observation consists of the particle's continuous position and velocity, and a static 16 × 16 downscaled image of the obstacles and goal position in the domain. In principle, such an observation is sufficient to devise a "rough plan" for the particle to follow. As in our previous experiments, we investigate whether a policy trained on several instances of this domain with different start state, goal, and obstacle positions, would generalize to an unseen domain.
For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts, and we used the publicly available GPS code [7], and MuJoCo [30] for physical simulation. We generated 200 random training instances, and evaluate our performance on 40 different test instances from the same distribution. Our VIN design is similar to the grid-world cases, with some important modifications: the attention module selects a 5 × 5 patch
of the value ¯V, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions. This is a reasonable prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map fR is not yet learned. We compare with a CNN-based reactive policy inspired by the state-of-the-art results in [21, 20], with 2 CNN layers for image processing, followed by a 3-layer fully connected network similar to the VIN reactive policy.
Figure 4 shows the performance of the trained policies, measured as the final distance to the target. The VIN clearly outperforms the CNN on test domains. We also plot several trajectories of both policies on test domains, showing that VIN learned a more sensible generalization of the task.
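The attention step for this domain can be pictured as follows: discretize the particle's continuous position onto the 16 × 16 map and read out the 5 × 5 patch of ¯V around it. This is a sketch only; the clipping and zero-padding at the border, and the function and argument names, are assumptions rather than details taken from the paper.

```python
import numpy as np

def attend_value_patch(v_bar, position, extent=(1.0, 1.0), patch=5):
    """v_bar: (16, 16) value map from the VI block; position: continuous (x, y)."""
    n = v_bar.shape[0]
    col = int(np.clip(position[0] / extent[0] * n, 0, n - 1))   # discretize x
    row = int(np.clip(position[1] / extent[1] * n, 0, n - 1))   # discretize y
    half = patch // 2
    padded = np.pad(v_bar, half)          # zero-pad so border cells still get a full patch
    return padded[row:row + patch, col:col + patch]

v_bar = np.random.default_rng(2).normal(size=(16, 16))
psi = attend_value_patch(v_bar, position=(0.7, 0.3)).ravel()
# psi, together with the particle's position and velocity, feeds the 3-layer reactive policy
```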
4.4 WebNav Challenge
In the previous experiments, the planning aspect of the task corresponded to 2D navigation. We now consider a more general domain: WebNav [23], a language based search task on a graph. In WebNav [23], the agent needs to navigate the links of a website towards a goal web-page, specified by a short 4-sentence query. At each state s (web-page), the agent can observe average word-embedding features of the state φ(s) and possible next states φ(s') (linked pages), and the features of the query φ(q), and based on that has to select which link to follow. In [23], the search was performed
on the Wikipedia website. Here, we report experiments on the "Wikipedia for Schools" website, a simplified Wikipedia designed for children, with over 6000 pages and at most 292 links per page.
In [23], a NN-based policy was proposed, which first learns a NN mapping from (φ(s), φ(q)) to a hidden state vector h. The action is then selected according to π(s'|φ(s), φ(q)) ∝ exp(h^T φ(s')). In essence, this policy is reactive, and relies on the word embedding features at each state to contain meaningful information about the path to the goal. Indeed, this property naturally holds for an encyclopedic website that is structured as a tree of categories, sub-categories, sub-sub-categories, etc. We sought to explore whether planning, based on a VIN, can lead to better performance in this task, with the intuition that a plan on a simplified model of the website can help guide the reactive policy in difficult queries. Therefore, we designed a VIN that plans
on a small subset of the graph that contains only the 1st and 2nd level categories (< 3% of the graph), and their word-embedding features. Designing this VIN requires a different approach from the grid-world VINs described earlier, where the most challenging aspect is to define a meaningful mapping between nodes in the true graph and nodes in the smaller VIN graph.
For the reward mapping fR, we chose a weighted similarity measure between the query features φ(q), and the features of nodes in the small graph φ(¯s). Thus, intuitively, nodes that are similar to the query should have high reward. The transitions were fixed based on the graph connectivity of the smaller VIN graph, which is known, though different from the true graph. The attention module was also based on a weighted similarity measure between the features of the possible next states φ(s') and the features of each node in the simplified graph φ(¯s). The reactive policy part of the VIN was similar to the policy of [23] described above. Note that by training such a VIN end-to-end,
we are effectively learning how to exploit the small graph for doing better planning on the true, large graph. Both the VIN policy and the baseline reactive policy were trained by supervised learning, on random trajectories that start from the root node of the graph. Similarly to [23], a policy is said to succeed on a query if all the correct predictions along the path are within its top-4 predictions.
After training, the VIN policy performed mildly better than the baseline on 2000 held-out test queries when starting from the root node, achieving 1030 successful runs vs. 1025 for the baseline. However, when we tested the policies on a harder task of starting from a random position in the graph, VINs significantly outperformed the baseline, achieving 346 successful runs vs. 304 for the baseline, out of 4000 test queries. These results confirm that indeed, when navigating a tree of categories from the root up, the features at each state contain meaningful information about the path to the goal, making a reactive policy sufficient. However, when starting the navigation from a different state, a reactive policy may fail to understand that it needs to first go back to the root and switch to a different branch in the tree. Our results indicate such a strategy can be better represented by a VIN.
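As a rough sketch of the similarity-based reward and attention used for WebNav: nodes ¯s of the small graph that resemble the query get high reward, and the small-graph values are weighed by how similar a candidate next page is to each node. The function names and the simple weighted cosine-style score below are assumptions; the paper only states that a weighted similarity between word-embedding features is used, with the weighting learned end-to-end.

```python
import numpy as np

def weighted_similarity(a, b, w):
    """Weighted cosine-style similarity between two feature vectors."""
    return float(np.dot(w * a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def reward_map(small_graph_feats, query_feat, w):
    """f_R on the small graph: nodes similar to the query get high reward."""
    return np.array([weighted_similarity(f, query_feat, w) for f in small_graph_feats])

def attention(values, small_graph_feats, next_page_feat, w):
    """Weigh the small-graph values by the similarity of a candidate next page to each node."""
    sims = np.array([weighted_similarity(f, next_page_feat, w) for f in small_graph_feats])
    weights = np.exp(sims - sims.max())
    return float(np.dot(weights / weights.sum(), values))

rng = np.random.default_rng(3)
d, n_nodes = 50, 30                               # embedding size, small-graph size
node_feats = rng.normal(size=(n_nodes, d))
query_feat, w = rng.normal(size=d), np.ones(d)    # w would be learned end-to-end
r_bar = reward_map(node_feats, query_feat, w)     # reward input to the VI block
```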
5 Conclusion and Outlook
The introduction of powerful and scalable RL methods has opened up a range of new problems for deep learning. However, few recent works investigate policy architectures that are specifically tailored for planning under uncertainty, and current RL theory and benchmarks rarely investigate the generalization properties of a trained policy [27, 21, 5]. This work takes a step in this direction, by exploring better generalizing policy representations. Our VIN policies learn an approximate planning computation relevant for solving the task, and we have shown that such a computation leads to better generalization in a diverse set of tasks, ranging from simple gridworlds that are amenable to value iteration, to continuous control, and even to navigation of Wikipedia links. In future work we intend to learn different planning computations, based on simulation [10], or optimal linear control [31], and combine them with reactive policies, to potentially develop RL solutions for task and motion planning [14].

# Acknowledgments

This research was funded in part by Siemens, by ONR through a PECASE award, by the Army Research Office through the MAST program, and by an NSF CAREER award (#1351028). A. T. was partially funded by the Viterbi Scholarship, Technion. Y. W. was partially funded by a DARPA PPAML program, contract FA8750-14-C-0011.

# References
[1] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, Vol II. Athena Scientific, 4th edition, 2012.
[3] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642-3649, 2012.
[4] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In ICML, 2011.
[5] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915-1929, 2013.
[7] C. Finn, M. Zhang, J. Fu, X. Tan, Z. McCarthy, E. Scharff, and S. Levine. Guided policy search code implementation, 2016. Software available from rll.berkeley.edu/gps.
[8] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position - neocognitron. Transactions of the IECE, J62-A(10):658-665, 1979.
[9] A. Giusti et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
[10] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[11] X. Guo, S. Singh, R. Lewis, and H. Lee. Deep learning for reward design to improve Monte Carlo tree search in Atari games. arXiv:1604.07095, 2016.
[12] R. Ilin, R. Kozma, and P. J. Werbos. Efficient learning in cellular simultaneous recurrent neural networks - the case of maze navigation problem. In ADPRL, 2007.
[13] J. Joseph, A. Geramifard, J. W. Roberts, J. P. How, and N. Roy. Reinforcement learning with misspecified model classes. In ICRA, 2013.
[14] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In International Conference on Robotics and Automation (ICRA), pages 1470-1477, 2011.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[17] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[22] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In UAI, 2007.
[23] R. Nogueira and K. Cho. WebNav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261, 2016.
[24] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In International Joint Conference on Neural Networks. IEEE, 1990.
[26] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[27] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[28] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
[29] T. Tieleman and G. Hinton. Lecture 6.5. COURSERA: Neural Networks for Machine Learning, 2012.
[30] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026-5033. IEEE, 2012.
[31] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[32] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

# A Visualization of Learned Reward and Value

In Figure 5 we plot the learned reward and value function for the gridworld task. The learned reward is very negative at obstacles, very positive at the goal, and a slightly negative constant otherwise. The resulting value function has a peak at the goal, and a gradient pointing towards a direction to the goal around obstacles. This plot clearly shows that the VI block learned a useful planning computation.
Figure 5: Visualization of learned reward and value function. Left: a sample domain. Center: learned reward fR for this domain. Right: resulting value function (in VI block) for this domain.
# B Weight Sharing
The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One may wonder whether any deep enough network would learn to plan. In principle, a CNN or FCN of depth K has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the K recurrent layers in the VIN. Our results in Table 2 show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data.
| Training data | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | Untied Pred. loss | Untied Succ. rate | Untied Traj. diff. |
|---|---|---|---|---|---|---|
| 20% | 0.06 | 98.2% | 0.106 | 0.09 | 91.9% | 0.094 |
| 50% | 0.05 | 99.4% | 0.018 | 0.07 | 95.2% | 0.078 |
| 100% | 0.05 | 99.3% | 0.089 | 0.05 | 95.6% | 0.068 |
Table 2: Performance on 16 × 16 grid-world domain. Evaluation of the effect of VI module shared weights relative to data size.
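The difference between the two variants in Table 2 amounts to how many kernel sets the VI recurrence carries. A tiny illustrative comparison (array names and sizes are assumptions, not the experimental code):

```python
import numpy as np

n_actions, K = 4, 20
rng = np.random.default_rng(4)

# Tied: one set of 3x3 transition kernels reused at every value-iteration step.
tied_kernels = rng.normal(size=(n_actions, 3, 3))

# Untied: an independent set per step -- roughly K times the parameters,
# which is why the untied variant needs more training data.
untied_kernels = rng.normal(size=(K, n_actions, 3, 3))

print("tied parameters:  ", tied_kernels.size)    # 36
print("untied parameters:", untied_kernels.size)  # 720
```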
# C Gridworld with Reinforcement Learning
We demonstrate that the value iteration network can be trained using reinforcement learning methods and achieves favorable generalization properties as compared to standard convolutional neural networks (CNNs). The overall setup of the experiment is as follows: we train policies parameterized by VINs and policies parameterized by convolutional networks on the same set of randomly generated gridworld maps in the same way (described below) and then test their performance on a held-out set of test maps, which was generated in the same way as the set of training maps but is disjoint from the training set.
The MDP is what one would expect of a gridworld environment: the states are the positions on the map; the actions are movements up, down, left, and right; the rewards are +1 for reaching the goal, -1 for falling into a hole, and -0.01 otherwise (to encourage the policy to find the shortest path); the transitions are deterministic.
Structure of the networks. The VINs used are similar to those described in the main body of the paper. After K value-iteration recurrences, we have approximate Q values for every state and action in the map. The attention selects only those for the current state, and these are converted to a
probability distribution over actions using the softmax function. We use K = 10 for the 8 × 8 maps and K = 20 for the 16 × 16 maps. The convolutional networks' structure was adapted to accommodate the size of the maps. For the 8 × 8 maps, we use 50 filters in the first layer and then 100 filters in the second layer, all of size 3 × 3. Each of these layers is followed by a 2 × 2 max-pool. At the end we have a fully connected hidden layer with 100 hidden units, followed by a fully-connected layer to the (4) outputs, which are converted to probabilities using the softmax function. The network for the 16 × 16 maps is similar but uses three convolutional layers (with 50, 100, and 100 filters respectively), the first two of which are 2 × 2 max-pooled, followed by two fully-connected hidden layers (200 and 100 hidden units respectively) before connecting to the outputs and performing softmax.
Training with a curriculum. To ensure that the
policies are not simply memorizing specific maps, we randomly select a map before each episode. But some maps are far more difficult than others, and the agent learns best when it stands a reasonable chance of reaching this goal. Thus we found it beneficial to begin training on the easiest maps and then gradually progress to more difficult maps. This is the idea of curriculum training.
We consider curriculum training as a way to address the exploration problem. If a completely untrained agent is dropped into a very challenging map, it moves randomly and stands approximately zero chance of reaching the goal (and thus learning a useful reward). But even a random policy can consistently reach goals nearby and learn something useful in the process, e.g. to move toward the goal. Once the policy knows how to solve tasks of difficulty n, it can more easily learn to solve tasks of difficulty n + 1, as compared to a completely untrained policy. This
1602.02867 | 55 | strategy is well-aligned with how formal education is structured; you canât effectively learn calculus without knowing basic algebra. Not all environments have an obvious difï¬culty metric, but fortunately the gridworld task does. We deï¬ne the difï¬culty of a map as the length of the shortest path from the start state to the goal state. It is natural to start with difï¬culty 1 (the start state and goal state are adjacent) and ramp up the difï¬culty by one level once a certain threshold of âsuccessâ is reached. In our experiments we use the average discounted return to assess progress and increase the difï¬culty level from n to n + 1 when the average discounted return for an iteration exceeds 1 â n 35 . This rule was chosen empirically and takes into account the fact that higher difï¬culty levels are more difï¬cult to learn. All networks were trained using the trust region policy optimization (TRPO) [26] algorithm, using publicly available code in the RLLab benchmark [5]. Testing. When testing, we ignore the exact rewards and measure simply whether or not the agent reaches the | 1602.02867#55 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
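The curriculum rule described above (move from difficulty n to n + 1 once the average discounted return of a training iteration exceeds 1 − n/35) is simple enough to state directly in code. A small sketch, with the function name and calling convention chosen purely for illustration:

```python
def update_difficulty(difficulty, avg_discounted_return):
    """Curriculum rule sketched from the text: start at difficulty 1
    (goal adjacent to the start state) and move from level n to n + 1
    once the average discounted return of an iteration exceeds 1 - n/35."""
    threshold = 1.0 - difficulty / 35.0
    if avg_discounted_return > threshold:
        return difficulty + 1
    return difficulty

# Example: at difficulty 5 the threshold is 1 - 5/35 ~= 0.857.
level = 5
level = update_difficulty(level, avg_discounted_return=0.90)  # -> 6
```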
1602.02867 | 56 | using publicly available code in the RLLab benchmark [5]. Testing. When testing, we ignore the exact rewards and measure simply whether or not the agent reaches the goal. For each map in the test set, we run an episode, noting if the policy succeeds in reaching the goal. The proportion of successful trials out of all the trials is reported for each network. (See Table 3.) On the 8 Ã 8 maps, we used the same number of training iterations on both types of networks to make the comparison as fair as possible. On the 16 Ã 16 maps, it became clear that the convolutional network was struggling, so we allowed it twice as many training iterations as the VIN, yet it still failed to achieve even a remotely similar level of performance on the test maps. (See left image of Figure 6.) We posit that this is because the VIN learns to plan, while the CNN simply follows a reactive policy. Though the CNN policy performs reasonably well on the smaller domains, it does not scale to larger domains, while the VIN does. (See right image of Figure 6.) | 1602.02867#56 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
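As a concrete reading of the test protocol above (exact rewards ignored, one episode per held-out map, success means reaching the goal), a short sketch follows; `run_episode` is a hypothetical helper, not part of the original code.

```python
def success_rate(policy, test_maps, run_episode):
    """Test protocol sketched from the text: for each held-out map run one
    episode and record whether the agent reaches the goal, then report the
    fraction of successful trials. `run_episode(policy, grid_map)` is an
    assumed helper returning True on success."""
    successes = sum(run_episode(policy, grid_map) for grid_map in test_maps)
    return successes / len(test_maps)
```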
1602.02867 | 57 | # D Technical Details for Experiments
We report the full technical details used for training our networks.
Figure 6: RL results â performance of VIN and CNN on 16 à 16 test maps. Left: Performance on all maps as a function of amount of training. Right: Success rate on test maps of increasing difï¬culty.
# D.1 Grid-world Domain | 1602.02867#57 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 58 | Our training set consists of N_i = 5000 random grid-world instances, with N_t = 7 shortest-path trajectories (calculated using an optimal planning algorithm) from a random start-state to a random goal-state for each instance; a total of N_i × N_t trajectories. For each state s = (i, j) in each trajectory, we produce a (2 × m × n)-sized observation image s_image. The first channel of s_image encodes the obstacle presence (1 for obstacle, 0 otherwise), while the second channel encodes the goal position (1 at the goal, 0 otherwise). The full observation vector is φ(s) = [s, s_image]. In addition, for each state we produce a label a that encodes the action (one of 8 directions) that an optimal shortest-path policy would take in that state. We design a VIN for this task as follows. The state space S̄ was chosen to be a m × n grid-world, similar to the true state space S.4 The reward R̄ in this space can be represented by an m × n map, and we chose the reward mapping f_R to be a CNN with s_image as its input, one layer with 150 kernels of size 3 × 3, and a second layer | 1602.02867#58 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
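The observation described above — a two-channel m × n image with an obstacle channel and a goal channel, paired with the agent coordinates — can be sketched as follows; the array layout and dtype are assumptions.

```python
import numpy as np

def make_observation(obstacle_grid, goal, state):
    """Observation for an m x n grid-world as described above: channel 0
    marks obstacles, channel 1 marks the goal; the full observation is the
    pair (state coordinates, s_image)."""
    m, n = obstacle_grid.shape
    s_image = np.zeros((2, m, n), dtype=np.float32)
    s_image[0] = obstacle_grid          # 1 for obstacle, 0 otherwise
    s_image[1, goal[0], goal[1]] = 1.0  # 1 at the goal position
    return np.asarray(state, dtype=np.float32), s_image

# Example on an 8 x 8 map with a single obstacle at (2, 3):
grid = np.zeros((8, 8), dtype=np.float32)
grid[2, 3] = 1.0
coords, img = make_observation(grid, goal=(7, 7), state=(0, 0))
```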
1602.02867 | 59 | map, and we chose the reward mapping f_R to be a CNN with s_image as its input, one layer with 150 kernels of size 3 × 3, and a second layer with one 3 × 3 filter to output R̄. Thus, f_R maps the image of obstacles and goal to a 'reward image'. The transitions P̄ were defined as 3 × 3 convolution kernels in the VI block, and exploit the fact that transitions in the grid-world are local. Note that the transitions defined this way do not depend on the state s. Interestingly, we shall see that the network learned rewards and transitions that nevertheless enable it to successfully plan in this task. For the attention module, since there is a one-to-one mapping between the agent position in S and in S̄, we chose a trivial approach that selects the Q̄ values in the VI block for the state in the real MDP s, i.e., ψ(s) = Q̄(s, ·). The final reactive policy is a fully connected softmax output layer with weights W, π_re(· | ψ(s)) ∝ exp(W^T ψ(s)). We trained several neural-network policies based on a multi-class logistic regression loss function using stochastic gradient descent, with an RMSProp step size [29], implemented in the | 1602.02867#59 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
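To make the grid-world VIN design described above concrete, here is a hedged PyTorch sketch of the VI module: a two-layer reward CNN f_R, K iterations of a 3 × 3 convolution producing Q̄ followed by a channel-wise max producing V̄, and an attention step that reads off Q̄(s, ·) at the agent's position. Channel counts follow the text; padding, the zero value initialization, and sharing a single convolution over [R̄; V̄] are simplifying assumptions rather than the authors' exact (Theano) implementation.

```python
import torch
import torch.nn as nn

class VIBlock(nn.Module):
    """Sketch of the grid-world VI module: a small CNN maps the
    (obstacle, goal) image to a reward map R_bar, then K iterations of
    Q_bar = conv([R_bar; V_bar]) followed by V_bar = max over the action
    channels approximate value iteration."""
    def __init__(self, num_q=10, hidden=150, k=20):
        super().__init__()
        self.k = k
        self.reward = nn.Sequential(
            nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )
        # 3x3 "transition" kernels applied to the stacked [R_bar; V_bar]
        self.q_conv = nn.Conv2d(2, num_q, 3, padding=1, bias=False)

    def forward(self, s_image):                        # (B, 2, m, n)
        r_bar = self.reward(s_image)                   # (B, 1, m, n)
        v_bar = torch.zeros_like(r_bar)
        for _ in range(self.k):
            q_bar = self.q_conv(torch.cat([r_bar, v_bar], dim=1))
            v_bar, _ = q_bar.max(dim=1, keepdim=True)  # max over actions
        return q_bar                                    # (B, num_q, m, n)

def attend(q_bar, states):
    """Attention used in the text: select the Q_bar values at the agent's
    (i, j) position; `states` is a (B, 2) long tensor of coordinates."""
    batch_idx = torch.arange(q_bar.size(0))
    return q_bar[batch_idx, :, states[:, 0], states[:, 1]]   # (B, num_q)
```

A final fully-connected softmax layer over the attended vector then gives the reactive policy π_re(· | ψ(s)) described above.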
1602.02867 | 61 | VIN network We used the VIN model of Section 3 as described above, with 10 channels for the q layer in the VI block. The recurrence K was set relative to the problem size: K = 10 for 8 à 8 domains, K = 20 for 16 à 16 domains, and K = 36 for 28 à 28 domains. The guideline for choosing these values was to keep the network small while guaranteeing that goal information can ï¬ow to every state in the map. CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers with [50, 50, 100, 100, 100] kernels of size 3 à 3, and 2 à 2 max-pooling after the ï¬rst and third layers. The ï¬nal layer is fully connected, and maps to a softmax over actions. To represent the current state, we added to simage a channel that encodes the current position (1 at the current state, 0 otherwise). | 1602.02867#61 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 62 | 4For a particular conï¬guration of obstacles, the true grid-world domain can be captured by a m à n state space with the obstacles encoded in the MDP transitions, as in our notation. For a general obstacle conï¬guration, the obstacle positions have to also be encoded in the state. The VIN was able to learn a policy for a general obstacle conï¬guration by planning in a m à n state space by also taking into account the observation of the map.
12 | 1602.02867#62 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 63 | 12
Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the ï¬rst layer has a ï¬lter that spans the whole image, to properly convey information from the goal to every other state. The ï¬rst convolution layer has 150 ï¬lters of size (2m â 1) à (2n â 1), which span the whole image and can convey information about the goal to every pixel. The second layer has 150 ï¬lters of size 1 à 1, and the third layer has 10 ï¬lters of size 1 à 1, to produce an output sized 10 à m à n, similarly to the ¯Q layer in our VIN. Similarly to the attention mechanism in the VIN, the values that correspond to the current state (pixel) are passed to a fully connected softmax output layer.
# D.2 Mars Domain | 1602.02867#63 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
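The FCN baseline described above can be sketched as follows; the padding needed to keep the spatial size, the ReLU activations, and the final linear mapping from the 10 per-pixel values to action logits are assumptions.

```python
import torch
import torch.nn as nn

class FCNPolicy(nn.Module):
    """Sketch of the fully-convolutional baseline for an m x n map: one
    150-filter layer whose (2m-1) x (2n-1) kernel spans the whole image,
    two 1x1 layers (150 and 10 filters), and a softmax head fed with the
    10 values at the current pixel."""
    def __init__(self, m=16, n=16, in_channels=2, num_actions=8):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 150, (2 * m - 1, 2 * n - 1),
                               padding=(m - 1, n - 1))
        self.conv2 = nn.Conv2d(150, 150, 1)
        self.conv3 = nn.Conv2d(150, 10, 1)
        self.out = nn.Linear(10, num_actions)

    def forward(self, s_image, states):               # states: (B, 2) long
        h = torch.relu(self.conv1(s_image))
        h = torch.relu(self.conv2(h))
        h = self.conv3(h)                             # (B, 10, m, n)
        batch_idx = torch.arange(h.size(0))
        per_pixel = h[batch_idx, :, states[:, 0], states[:, 1]]
        return torch.softmax(self.out(per_pixel), dim=-1)
```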
1602.02867 | 64 | We consider the problem of autonomously navigating the surface of Mars by a rover such as the Mars Science Laboratory (MSL) (Lockwood, 2006) over long-distance trajectories. The MSL has a limited ability for climbing high-degree slopes, and its path-planning algorithm should therefore avoid navigating into high-slope areas. In our experiment, we plan trajectories that avoid slopes of 10 degrees or more, using overhead terrain images from the High Resolution Imaging Science Experiment (HiRISE) (McEwen et al., 2007). The HiRISE data consists of grayscale images of the Mars terrain, and matching elevation data, accurate to tens of centimeters. We used an image of a 33.3km by 6.3km area at 49.96 degrees latitude and 219.2 degrees longitude, with a 10.5 sq. meters / pixel resolution. Each domain is a 128 à 128 image patch, on which we deï¬ned a 16 à 16 grid-world, where each state was considered an obstacle if its corresponding 8 à 8 image patch contained an angle of 10 degrees or more, evaluated using an additional elevation data. An example of the domain and terrain image is depicted in Figure 3. The | 1602.02867#64 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
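For the obstacle labels described above (a 16 × 16 grid over a 128 × 128 image patch, where a cell is an obstacle if its 8 × 8 elevation patch contains a slope of 10 degrees or more), a rough numpy sketch follows; the finite-difference slope estimate and the handling of the 10.5 m/pixel scale are assumptions, not the authors' exact procedure.

```python
import numpy as np

def obstacle_grid_from_elevation(elevation, cell=8, max_slope_deg=10.0,
                                 meters_per_pixel=10.5):
    """Label construction sketched from the text: split the patch into a
    grid of `cell` x `cell` blocks and mark a block as an obstacle if any
    of its elevation pixels has a slope of `max_slope_deg` or more."""
    gy, gx = np.gradient(elevation, meters_per_pixel)    # elevation gradients
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))  # per-pixel slope
    m, n = elevation.shape[0] // cell, elevation.shape[1] // cell
    grid = np.zeros((m, n), dtype=np.float32)
    for i in range(m):
        for j in range(n):
            patch = slope_deg[i * cell:(i + 1) * cell,
                              j * cell:(j + 1) * cell]
            grid[i, j] = float(patch.max() >= max_slope_deg)
    return grid
```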
1602.02867 | 65 | image patch contained an angle of 10 degrees or more, evaluated using an additional elevation data. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping fR for processing the image. Our goal is to train a network that predicts the shortest-path trajectory directly from the terrain image data. We emphasize that the ground-truth elevation data is not part of the input, and the elevation therefore must be inferred (if needed) from the terrain image itself. Our VIN design follows the model of Section 4.1. In this case, however, instead of feeding in the obstacle map, we feed in the raw terrain image, and accordingly modify the reward mapping fR with 2 additional CNN layers for processing the image: the ï¬rst with 6 kernels of size 5 à 5 and 4 à 4 max-pooling, and the second with a 12 kernels of size 3 à 3 and 2 à 2 max-pooling. The resulting 12 à m à n tensor is concatenated with the goal image, and passed to a third | 1602.02867#65 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 66 | of size 3 à 3 and 2 à 2 max-pooling. The resulting 12 à m à n tensor is concatenated with the goal image, and passed to a third layer with 150 kernels of size 3 à 3 and a fourth layer with one 3 à 3 ï¬lter to output ¯R. The state inputs and output labels remain as in the grid-world experiments. We emphasize that the whole network is trained end-to-end, without pre-training the input ï¬lters. In Table 4 we present our results for training a m = n = 16 map from a 10K image-patch dataset, with 7 random trajectories per patch, evaluated on a held-out test set of 1K patches. Figure 3 shows an instance of the input image, the obstacles, the shortest-path trajectory, and the trajectory predicted by our method. To put the 84.8% success rate in context, we compare with the best performance achievable without access to the elevation data. To make this comparison, we trained a CNN to classify whether an 8 à 8 patch is an obstacle or not. This classiï¬er was trained using the same image data as the VIN network, but its labels were the true obstacle classiï¬cations | 1602.02867#66 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 67 | or not. This classiï¬er was trained using the same image data as the VIN network, but its labels were the true obstacle classiï¬cations from the elevation map (we reiterate that the VIN network did not have access to these ground-truth obstacle classiï¬cation labels during training or testing). Training this classiï¬er is a standard binary classiï¬cation problem, and its performance represents the best obstacle identiï¬cation possible with our CNN in this domain. The best-achievable shortest-path prediction is then deï¬ned as the shortest path in an obstacle map generated by this classiï¬er from the raw image. The results of this optimal predictor are reported in Table 1. The 90.3% success rate shows that obstacle identiï¬cation from the raw image is indeed challenging. Thus, the success rate of the VIN network, which was trained without any obstacle labels, and had to âï¬gure outâ the planning process is quite remarkable. | 1602.02867#67 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 68 | # D.3 Continuous Control
For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts, and we used the publicly available GPS code [7], and Mujoco [30] for physical simulation. GPS works by learning time-varying iLQG controllers for each domain, and then fitting the controllers to a single NN policy using
| | Pred. loss | Succ. rate | Traj. diff. |
|---|---|---|---|
| VIN | 0.089 | 84.8% | 0.016 |
| Best achievable | - | 90.3% | 0.0089 |
Table 4: Performance of VINs on the Mars domain. For comparison, the performance of a planner that used obstacle predictions trained from labeled obstacle data is shown. This upper bound on performance demonstrates the difï¬culty in identifying obstacles from the raw image data. Remarkably, the VIN achieved close performance without access to any labeled data about the obstacles. | 1602.02867#68 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 69 | supervised learning. This process is repeated for several iterations, and a special cost function is used to enforce an agreement between the trajectory distribution of the iLQG and NN controllers. We refer to [17, 7] for the full algorithm details. For our task, we ran 10 iterations of iLQG, with the cost being a quadratic distance to the goal, followed by one iteration of NN policy ï¬tting. This allows us to cleanly compare VINs to other policies without GPS-speciï¬c effects. Our VIN design is similar to the grid-world cases: the state space ¯S is a 16 à 16 grid-world, and the transitions ¯P are 3 à 3 convolution kernels in the VI block, similar to the grid-world of Section 4.1. However, we made some important modiï¬cations: the attention module selects a 5 à 5 patch of the value ¯V , centered around the current (discretized) position in the map. The ï¬nal reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted | 1602.02867#69 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
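The transition-weight pre-initialization mentioned above (γ placed at the cell an action moves toward, e.g. the top-left corner for the north-west action, zeros elsewhere) can be written down directly; the 8-action ordering below is an assumption.

```python
import numpy as np

def discounted_transition_kernels(gamma=0.99):
    """Initialization sketched from the text: one 3 x 3 kernel per action,
    with gamma at the cell the action moves toward and zeros elsewhere
    (the north-west action puts gamma in the top-left corner)."""
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    kernels = np.zeros((len(moves), 3, 3), dtype=np.float32)
    for a, (di, dj) in enumerate(moves):
        kernels[a, 1 + di, 1 + dj] = gamma
    return kernels  # shape (8, 3, 3), used to seed the VI-block conv weights
```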
1602.02867 | 70 | continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions (for example, the transitions for an action to go north-west would be γ in the top left corner and zeros otherwise), before training end-to-end. This is a reasonable prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map fR is not yet learned. The reward mapping fR is a CNN with simage as its input, one layer with 150 kernels of size 3 à 3, and a second layer with one 3 à 3 ï¬lter to output ¯R. | 1602.02867#70 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 72 | âWebNavâ [23] is a recently proposed goal-driven web navigation benchmark. In WebNav, web pages and links from some website form a directed graph G(S, E). The agent is presented with a query text, which consists of N, sentences from a target page at most N;, hops away from the starting page. The goal for the agent is to navigate to that target page from the starting page via clicking at most Ny links per page. Here, we choose N;, = Ny = Np = 4. In [23], the agent receives a reward of 1 when reaching the target page via any path no longer than 10 hops. For evaluation convenience, in our experiment, the agent can receive a reward only if it reaches the destination via the shortest path, which makes the task much harder. We measure the top-1 and top-4 prediction accuracy as well as the average reward for the baseline and our VIN model. For every page s, the valid transitions are A, = {sâ : (s,sâ) ⬠E}. For every web page s and every query text g, we utilize the bag-of-words model with pretrained word embedding provided by [23] to produce feature vectors ¢(s) and | 1602.02867#72 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 74 | a hidden vector h: h(s, q) = tanh(W [φ(s); φ(q)]). The final baseline policy is computed via π_bsl(s' | s, q) ∝ exp(h(s, q)^T φ(s')) for s' ∈ A_s. We design a VIN for this task as follows. We firstly selected a smaller website as the approximate graph Ḡ(S̄, Ē), and choose S̄ as the states in VI. For query q and a page s̄ in S̄, we compute the reward R̄(s̄) by f_R(s̄ | q) = tanh((W_R φ(q) + b_R)^T φ(s̄)) with parameters W_R (diagonal matrix) and b_R (vector). For transition, since the graph remains unchanged, P̄ is fixed. For the attention module Π(V̄*, s), we compute it by Π(V̄*, s) = Σ_{s̄∈S̄} sigmoid((W_Π φ(s) + b_Π)^T φ(s̄)) V̄*(s̄), where W_Π and b_Π are parameters and W_Π is diagonal. Moreover, we compute the coefficient γ based on the query q and the state s using a tanh-layer neural net parametrized by W_γ: γ(s, q) =
14 | 1602.02867#74 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
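The WebNav reward mapping and attention reconstructed above reduce to a few dot products once the diagonal matrices are stored as vectors of their diagonal entries. A numpy sketch, with shapes chosen for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def webnav_reward(phi_q, phi_pages, w_r, b_r):
    """Reward mapping as reconstructed above: f_R(s_bar | q) =
    tanh((W_R phi(q) + b_R)^T phi(s_bar)) with W_R diagonal (here `w_r`
    holds its diagonal). phi_pages is (num_pages, d)."""
    gated_query = w_r * phi_q + b_r               # (d,)
    return np.tanh(phi_pages @ gated_query)       # (num_pages,)

def webnav_attention(phi_s, phi_pages, v_star, w_pi, b_pi):
    """Attention as reconstructed above: a sigmoid-weighted sum of the
    approximate values V_bar*(s_bar) over all pages s_bar."""
    weights = sigmoid(phi_pages @ (w_pi * phi_s + b_pi))   # (num_pages,)
    return float(weights @ v_star)
```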
1602.02867 | 76 | Table 5: Performance on the full wikipedia dataset.
tanh Wγ . Finally, we combine the VI module and the baseline method as our VIN
model by simply adding the outputs from these two networks together. In addition to the experiments reported in the main text, we performed experiments on the full wikipedia, using âwikipedia for schoolsâ as the graph for VIN planning. We report our preliminary results here. Full wikipedia website: The full wikipedia dataset consists 779169 training queries (3 million training samples) and 20004 testing queries (76664 testing samples) over 4.8 million pages with maximum 300 links per page. We use the whole WikiSchool website as our approximate graph and set K = 4. In VIN, to accelerate training, we ï¬rstly only train the VI module with K = 0. Then, we ï¬x ¯R obtained in the K = 0 case and jointly train the whole model with K = 4. The results are shown in Tab. 5 VIN achieves 1.5% better prediction accuracy than the baseline. Interestingly, with only 1.5% prediction accuracy enhancement, VIN achieves 2.5% better success rate than the baseline: note that the agent can only success when making 4 consecutive correct predictions. This indicates the VI does provide useful high-level planning information.
# D.5 Additional Technical Comments | 1602.02867#76 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 77 | # D.5 Additional Technical Comments
Runtime: For the 2D domains, different samples from the same domain share the same VI computation, since they have the same observation. Therefore, a single VI computation is required for samples from the same domain. Using this, and GPU code (Theano), VINs are not much slower than the baselines. For the language task, however, since Theano doesn't support convolutions on graphs nor sparse operations on GPU, VINs were considerably slower in our implementation.
# E Hierarchical VI Modules | 1602.02867#77 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 78 | The number of VI iterations K required in the VIN depends on the problem size. Consider, for example, a grid-world in which the goal is located L steps away from some state s. Then, at least L iterations of VI are required to convey the reward information from the goal to state s, and clearly, any action prediction obtained with less than L VI iterations at state s is unaware of the goal location, and therefore unacceptable. To convey reward information faster in VI, and reduce the effective K, we propose to perform VI at multiple levels of resolution. We term this model a hierarchical VI Network (HVIN), due to its similarity with hierarchical planning algorithms. In a HVIN, a copy of the input down-sampled by a factor of d is ï¬rst fed into a VI module termed the high-level VI module. The down-sampling offers a dà speedup of information transmission in the map, at the price of reduced accuracy. The value layer of the high-level VI module is then up-sampled, and added as an additional input channel to the input of the standard VI module. Thus, the high-level VI module learns a mapping from down-sampled image features to a suitable reward-shaping for | 1602.02867#78 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02867 | 79 | channel to the input of the standard VI module. Thus, the high-level VI module learns a mapping from down-sampled image features to a suitable reward-shaping for the nominal VI module. The full HVIN model is depicted in Figure 7. This model can easily be extended to include multiple levels of hierarchy. Table 6 shows the performance of the HVIN module in the grid-world task, compared to the VIN results reported in the main text. We used a 2 Ã 2 down-sampling layer. Similarly to the standard VIN, 3 Ã 3 convolution kernels, 150 channels for each hidden layer H (for both the down-sampled image, and standard image), and 10 channels for the q layer in each VI block. Similarly to the VIN networks, the recurrence K was set relative to the problem size, taking into account the down- sampling factor: K = 4 for 8 Ã 8 domains, K = 10 for 16 Ã 16 domains, and K = 16 for 28 Ã 28 domains (in comparison, the respective K values for standard VINs were 10, 20, and 36). The HVINs demonstrated better performance for the larger 28 Ã 28 map, which we attribute to the improved information transmission in the hierarchical VI module. | 1602.02867#79 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
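The hierarchical wiring described above (down-sample a copy of the input, run a coarse VI module, up-sample its value map, and feed it to the fine VI module as an extra reward-layer channel) can be sketched as follows; average pooling and nearest-neighbour up-sampling are assumptions — the text only specifies a 2 × 2 down-sampling layer.

```python
import torch
import torch.nn.functional as F

def hierarchical_value_prior(vi_block_coarse, s_image, down=2):
    """HVIN wiring sketched from the text and Figure 7: down-sample the
    observation by `down`, run the high-level VI module on it, up-sample
    the resulting value map, and append it as an additional channel for
    the standard (fine-level) VI module. `vi_block_coarse` is assumed to
    return a (B, 1, m/down, n/down) value map."""
    coarse_in = F.avg_pool2d(s_image, kernel_size=down)
    coarse_value = vi_block_coarse(coarse_in)
    value_prior = F.interpolate(coarse_value, scale_factor=down,
                                mode="nearest")
    # Stack the up-sampled value prior with the original observation.
    return torch.cat([s_image, value_prior], dim=1)
```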
1602.02867 | 80 | 15
Figure 7: Hierarchical VI network. A copy of the input is ï¬rst fed into a convolution layer and then downsampled. This signal is then fed into a VI module to produce a coarse value function, corresponding to the upper level in the hierarchy. This value function is then up-sampled, and added as an additional channel in the reward layer of a standard VI module (lower level of the hierarchy).
| Domain | VIN Prediction loss | VIN Success rate | VIN Trajectory diff. | Hierarchical VIN Prediction loss | Hierarchical VIN Success rate | Hierarchical VIN Trajectory diff. |
|---|---|---|---|---|---|---|
| 8 × 8 | 0.004 | 99.6% | 0.001 | 0.005 | 99.3% | 0.0 |
| 16 × 16 | 0.05 | 99.3% | 0.089 | 0.03 | 99% | 0.007 |
| 28 × 28 | 0.11 | 97% | 0.086 | 0.05 | 98.1% | 0.037 |
Table 6: HVIN performance on grid-world domain.
16 | 1602.02867#80 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320 | [
{
"id": "1602.02261"
},
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1604.07095"
}
] |
1602.02410 | 1 | Google Brain
Abstract In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
# 1. Introduction | 1602.02410#1 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211 | [
{
"id": "1512.00103"
},
{
"id": "1511.03729"
},
{
"id": "1508.06615"
},
{
"id": "1502.04681"
},
{
"id": "1509.00685"
},
{
"id": "1508.00657"
},
{
"id": "1511.03962"
},
{
"id": "1508.02096"
},
{
"id": "1506.05869"
}
] |
1602.02410 | 2 | # 1. Introduction
Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amount of information about the knowledge that a corpus may contain. Indeed, models that are able to assign a low probability to sentences that are grammatically correct but unlikely may help other tasks in fundamental language understanding like question answering, machine translation, or text summarization.
LMs have played a key role in traditional NLP tasks such as speech recognition (Mikolov et al., 2010; Arisoy et al., 2012), machine translation (Schwenk et al., 2012; Vaswani et al.), or text summarization (Rush et al., 2015; Filippova et al., 2015). Often (although not always), training better
language models improves the underlying metrics of the downstream task (such as word error rate for speech recog- nition, or BLEU score for translation), which makes the task of training better LMs valuable by itself. | 1602.02410#2 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211 | [
{
"id": "1512.00103"
},
{
"id": "1511.03729"
},
{
"id": "1508.06615"
},
{
"id": "1502.04681"
},
{
"id": "1509.00685"
},
{
"id": "1508.00657"
},
{
"id": "1511.03962"
},
{
"id": "1508.02096"
},
{
"id": "1506.05869"
}
] |
1602.02410 | 3 | Further, when trained on vast amounts of data, language models compactly extract knowledge encoded in the train- ing data. For example, when trained on movie subti- tles (Serban et al., 2015; Vinyals & Le, 2015), these lan- guage models are able to generate basic answers to ques- tions about object colors, facts about people, etc. Lastly, recently proposed sequence-to-sequence models employ conditional language models (Mikolov & Zweig, 2012) as their key component to solve diverse tasks like machine translation (Sutskever et al., 2014; Cho et al., 2014; Kalch- brenner et al., 2014) or video generation (Srivastava et al., 2015a). | 1602.02410#3 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211 | [
{
"id": "1512.00103"
},
{
"id": "1511.03729"
},
{
"id": "1508.06615"
},
{
"id": "1502.04681"
},
{
"id": "1509.00685"
},
{
"id": "1508.00657"
},
{
"id": "1511.03962"
},
{
"id": "1508.02096"
},
{
"id": "1506.05869"
}
] |
1602.02410 | 4 | Deep Learning and Recurrent Neural Networks (RNNs) have fueled language modeling research in the past years as it allowed researchers to explore many tasks for which the strong conditional independence assumptions are unrealistic. Despite the fact that simpler models, such as N-grams, only use a short history of previous words to predict the next word, they are still a key component to high quality, low perplexity LMs. Indeed, most recent work on large scale LM has shown that RNNs are great in combination with N-grams, as they may have different strengths that complement N-gram models, but worse when considered in isolation (Mikolov et al., 2011; Mikolov, 2012; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2015a; Shazeer et al., 2015).
We believe that, despite much work being devoted to small data sets like the Penn Tree Bank (PTB) (Marcus et al., 1993), research on larger tasks is very relevant as overfitting is not the main limitation in current language modeling, but is the main characteristic of the PTB task. Results on larger corpora usually show better what matters as many ideas work well on small data sets but fail to improve on
Exploring the Limits of Language Modeling | 1602.02410#4 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211 | [
{
"id": "1512.00103"
},
{
"id": "1511.03729"
},
{
"id": "1508.06615"
},
{
"id": "1502.04681"
},
{
"id": "1509.00685"
},
{
"id": "1508.00657"
},
{
"id": "1511.03962"
},
{
"id": "1508.02096"
},
{
"id": "1506.05869"
}
] |
1602.02410 | 5 | Exploring the Limits of Language Modeling
⢠We show that an ensemble of a number of different models can bring down perplexity on this task to 23.7, a large improvement compared to current state-of-art.
⢠We share the model and recipes in order to help and motivate further research in this area.
In Section 2 we review important concepts and previous work on language modeling. Section 3 presents our contributions to the field of neural language modeling, emphasizing large scale recurrent neural network training. Sections 4 and 5 aim at exhaustively describing our experience and understanding throughout the project, as well as emplacing our work relative to other known approaches.
# 2. Related Work
Figure 1. A high-level diagram of the models presented in this paper. (a) is a standard LSTM LM. (b) represents an LM where both input and Softmax embeddings have been replaced by a character CNN. In (c) we replace the Softmax by a next character prediction LSTM network.
In this section we describe previous work relevant to the approaches discussed in this paper. A more detailed discussion on language modeling research is provided in (Mikolov, 2012). | 1602.02410#5 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211 | [
{
"id": "1512.00103"
},
{
"id": "1511.03729"
},
{
"id": "1508.06615"
},
{
"id": "1502.04681"
},
{
"id": "1509.00685"
},
{
"id": "1508.00657"
},
{
"id": "1511.03962"
},
{
"id": "1508.02096"
},
{
"id": "1506.05869"
}
] |