# Deep Residual Learning for Image Recognition

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Tech report, arXiv:1512.03385 (http://arxiv.org/pdf/1512.03385), December 2015.

Abstract. Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:
| output map size | # layers | # filters |
|---|---|---|
| 32×32 | 1+2n | 16 |
| 16×16 | 2n | 32 |
| 8×8 | 2n | 64 |
When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
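To make the 6n+2 construction concrete, here is a minimal PyTorch sketch of this CIFAR architecture. It is our own illustration rather than the authors' code: the class names (`BasicBlock`, `CifarResNet`) are hypothetical, and the option-A identity shortcuts are realized as parameter-free subsampling plus zero-padding of channels.

```python
# Minimal sketch of the 6n+2-layer CIFAR architecture described above (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Two 3x3 conv layers with an identity ("option A") shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.stride, self.pad = stride, out_ch - in_ch

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        shortcut = x
        if self.stride != 1 or self.pad != 0:
            # Option A: subsample spatially and zero-pad channels; adds no parameters.
            shortcut = F.pad(x[:, :, ::self.stride, ::self.stride],
                             (0, 0, 0, 0, 0, self.pad))
        return F.relu(out + shortcut)

class CifarResNet(nn.Module):
    def __init__(self, n=3, num_classes=10):   # n=3 -> 20 layers, n=18 -> 110 layers
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        layers, in_ch = [], 16
        for out_ch, stride in [(16, 1), (32, 2), (64, 2)]:   # feature maps 32, 16, 8
            for i in range(n):
                layers.append(BasicBlock(in_ch, out_ch, stride if i == 0 else 1))
                in_ch = out_ch
        self.blocks = nn.Sequential(*layers)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.blocks(x)
        x = x.mean(dim=(2, 3))       # global average pooling
        return self.fc(x)            # 10-way classifier (softmax applied in the loss)
```

With n = 3 this gives the 20-layer network; n = 18 gives the 110-layer variant discussed below.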
| method | # layers | # params | error (%) |
|---|---|---|---|
| Maxout [10] | - | - | 9.38 |
| NIN [25] | - | - | 8.81 |
| DSN [24] | - | - | 8.22 |
| FitNet [35] | 19 | 2.5M | 8.39 |
| Highway [42, 43] | 19 | 2.3M | 7.54 (7.72±0.16) |
| Highway [42, 43] | 32 | 1.25M | 8.80 |
| ResNet | 20 | 0.27M | 8.75 |
| ResNet | 32 | 0.46M | 7.51 |
| ResNet | 44 | 0.66M | 7.17 |
| ResNet | 56 | 0.85M | 6.97 |
| ResNet | 110 | 1.7M | 6.43 (6.61±0.16) |
| ResNet | 1202 | 19.4M | 7.93 |
Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show "best (mean±std)" as in [43].
We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.

We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.
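As a concrete rendering of this recipe (our sketch, not the released training code), the following uses PyTorch and the hypothetical `CifarResNet` above; per-pixel mean subtraction is approximated by per-channel mean subtraction, and the 0.01 warm-up used later for the 110-layer model is omitted.

```python
# Hedged sketch of the CIFAR-10 training recipe described above.
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_tf = T.Compose([
    T.RandomCrop(32, padding=4),        # pad 4 pixels per side, sample a 32x32 crop
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    # The paper subtracts the per-pixel mean; per-channel means are used here instead.
    T.Normalize((0.4914, 0.4822, 0.4465), (1.0, 1.0, 1.0)),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=train_tf)
loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = CifarResNet(n=3)                # 20-layer network; n in {3, 5, 7, 9} in the paper
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# Divide the learning rate by 10 at 32k and 48k iterations; stop at 64k iterations.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[32000, 48000], gamma=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

it = 0
while it < 64000:
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
        sched.step()                    # the schedule is counted in iterations, not epochs
        it += 1
        if it >= 64000:
            break
```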
Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.
We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging (footnote 5). So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).
Footnote 5: With an initial learning rate of 0.1, it starts converging (<90% error) after several epochs, but still reaches similar accuracy.
Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.
Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.
| training data | test data | VGG-16 | ResNet-101 |
|---|---|---|---|
| 07+12 | VOC 07 test | 73.2 | 76.4 |
| 07++12 | VOC 12 test | 70.4 | 73.8 |
Table 7. Object detection mAP (%) on the PASCAL VOC 2007/2012 test sets using baseline Faster R-CNN. See also Table 10 and 11 for better results.
| metric | VGG-16 | ResNet-101 |
|---|---|---|
| [email protected] | 41.5 | 48.4 |
| mAP@[.5, .95] | 21.2 | 27.2 |
Table 8. Object detection mAP (%) on the COCO validation set using baseline Faster R-CNN. See also Table 9 for better results.
Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec. 3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.
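One way to reproduce this kind of measurement (a sketch under our own assumptions, using the hypothetical `CifarResNet` above) is to hook the BN outputs, which are exactly the responses described here:

```python
# Sketch: collect per-layer response std (output of each BN, before the nonlinearity).
import torch
import torch.nn as nn

def layer_response_stds(model, images):
    stds, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda _m, _inp, out: stds.append(out.detach().std().item())))
    with torch.no_grad():
        model.eval()
        model(images)
    for h in hooks:
        h.remove()
    return stds   # one std per 3x3 conv layer, in forward order

# e.g. stds = layer_response_stds(CifarResNet(n=3), torch.randn(64, 3, 32, 32))
```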
Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10^3-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).
But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [10] or dropout [14] is applied to obtain the best results ([10, 25, 24, 35]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.
# 4.3. Object Detection on PASCAL and MS COCO
Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [41] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.
Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
# References
[1] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.
[3] W. L. Briggs, S. F. McCormick, et al. A Multigrid Tutorial. SIAM, 2000.
[4] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In BMVC, 2011.
[5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. IJCV, pages 303–338, 2010.
[6] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware CNN model. In ICCV, 2015.
[7] R. Girshick. Fast R-CNN. In ICCV, 2015.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[9] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv:1302.4389, 2013.
[11] K. He and J. Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.
[14] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[15] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[17] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. TPAMI, 33, 2011.
[18] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. TPAMI, 2012.
[19] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[20] A. Krizhevsky. Learning multiple layers of features from tiny images. Tech Report, 2009.
[21] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[22] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
[23] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–50. Springer, 1998.
[24] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv:1409.5185, 2014.
[25] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv:1312.4400, 2013.
[26] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[27] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[28] G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In NIPS, 2014.
[29] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[30] F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In CVPR, 2007.
[31] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In AISTATS, 2012.
[32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[33] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. arXiv:1504.06066, 2015.
[34] B. D. Ripley. Pattern recognition and neural networks. Cambridge university press, 1996.
[35] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015.
[36] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv:1409.0575, 2014.
[37] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, 2013.
[38] N. N. Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical report, 1998.
[39] N. N. Schraudolph. Centering neural network gradient factors. In Neural Networks: Tricks of the Trade, pages 207–226. Springer, 1998.
[40] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[42] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv:1505.00387, 2015.
[43] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. arXiv:1507.06228, 2015.
[44] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[45] R. Szeliski. Fast surface interpolation using hierarchical basis functions. TPAMI, 1990.
[46] R. Szeliski. Locally adapted hierarchical basis preconditioning. In SIGGRAPH, 2006.
[47] T. Vatanen, T. Raiko, H. Valpola, and Y. LeCun. Pushing stochastic gradient towards second-order methods: backpropagation learning with transformations in nonlinearities. In Neural Information Processing, 2013.
[48] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008.
[49] W. Venables and B. Ripley. Modern applied statistics with S-PLUS. 1999.
[50] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. In ECCV, 2014.
# A. Object Detection Baselines
In this section we introduce our detection method based on the baseline Faster R-CNN [32] system. The models are initialized by the ImageNet classification models, and then fine-tuned on the object detection data. We have experimented with ResNet-50/101 at the time of the ILSVRC & COCO 2015 detection competitions.
Unlike VGG-16 used in [32], our ResNet has no hidden fc layers. We adopt the idea of "Networks on Conv feature maps" (NoC) [33] to address this issue. We compute the full-image shared conv feature maps using those layers whose strides on the image are no greater than 16 pixels (i.e., conv1, conv2_x, conv3_x, and conv4_x, totally 91 conv layers in ResNet-101; Table 1). We consider these layers as analogous to the 13 conv layers in VGG-16, and by doing so, both ResNet and VGG-16 have conv feature maps of the same total stride (16 pixels). These layers are shared by a region proposal network (RPN, generating 300 proposals) [32] and a Fast R-CNN detection network [7]. RoI pooling [7] is performed before conv5_1. On this RoI-pooled feature, all layers of conv5_x and up are adopted for each region, playing the roles of VGG-16's fc layers. The final classification layer is replaced by two sibling layers (classification and box regression [7]).
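A rough sketch of this split using torchvision's ResNet-101 is shown below; the variable names and the 81-way classifier are our assumptions, and torchvision's `layer1`-`layer4` correspond to conv2_x-conv5_x.

```python
# Illustrative NoC-style split (not the paper's code): conv1 through conv4_x form the
# shared, stride-16 feature extractor; conv5_x plus pooling act as the per-RoI head.
import torch.nn as nn
import torchvision

resnet = torchvision.models.resnet101(weights=None)   # ImageNet-pretrained weights in practice
backbone = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                         resnet.layer1, resnet.layer2, resnet.layer3)   # total stride 16
roi_head = nn.Sequential(resnet.layer4, nn.AdaptiveAvgPool2d(1), nn.Flatten())
feat_dim, num_classes = 2048, 81          # 80 COCO categories + background (assumption)
cls_score = nn.Linear(feat_dim, num_classes)         # sibling classification layer
bbox_pred = nn.Linear(feat_dim, num_classes * 4)     # sibling box-regression layer
```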
For the usage of BN layers, after pre-training, we compute the BN statistics (means and variances) for each layer on the ImageNet training set. Then the BN layers are fixed during fine-tuning for object detection. As such, the BN layers become linear activations with constant offsets and scales, and BN statistics are not updated by fine-tuning. We fix the BN layers mainly for reducing memory consumption in Faster R-CNN training.
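For illustration, freezing the BN layers in this way could look like the sketch below (our assumption of how to realize it in PyTorch, not the authors' implementation):

```python
# Sketch: turn every BatchNorm layer into a fixed affine transform for fine-tuning.
import torch.nn as nn

def freeze_bn(model):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()                       # use stored running mean/var (constant stats)
            for p in m.parameters():
                p.requires_grad = False    # keep the scale/offset fixed as well
    return model

# Note: model.train() puts BN modules back into training mode, so freeze_bn() must be
# re-applied (or .train() overridden) after switching the detector to training mode.
```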
# PASCAL VOC
Following [7, 32], for the PASCAL VOC 2007 test set, we use the 5k trainval images in VOC 2007 and 16k trainval images in VOC 2012 for training ("07+12"). For the PASCAL VOC 2012 test set, we use the 10k trainval+test images in VOC 2007 and 16k trainval images in VOC 2012 for training ("07++12"). The hyper-parameters for training Faster R-CNN are the same as in [32]. Table 7 shows the results. ResNet-101 improves the mAP by >3% over VGG-16. This gain is solely because of the improved features learned by ResNet.
# MS COCO
The MS COCO dataset [26] involves 80 object categories. We evaluate the PASCAL VOC metric (mAP @ IoU = 0.5) and the standard COCO metric (mAP @ IoU = .5:.05:.95). We use the 80k images on the train set for training and the 40k images on the val set for evaluation. Our detection system for COCO is similar to that for PASCAL VOC. We train the COCO models with an 8-GPU implementation, and thus the RPN step has a mini-batch size of 8 images (i.e., 1 per GPU) and the Fast R-CNN step has a mini-batch size of 16 images. The RPN step and Fast R-CNN step are both trained for 240k iterations with a learning rate of 0.001 and then for 80k iterations with 0.0001.
Table 8 shows the results on the MS COCO validation set. ResNet-101 has a 6% increase of mAP@[.5, .95] over VGG-16, which is a 28% relative improvement, solely contributed by the features learned by the better network. Remarkably, the mAP@[.5, .95]'s absolute increase (6.0%) is nearly as big as [email protected]'s (6.9%). This suggests that a deeper network can improve both recognition and localization.
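From Table 8, the absolute gains work out as 48.4 - 41.5 = 6.9 points at IoU 0.5 and 27.2 - 21.2 = 6.0 points at mAP@[.5, .95]; the latter corresponds to a relative improvement of 6.0 / 21.2 ≈ 28%.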
# B. Object Detection Improvements
For completeness, we report the improvements made for the competitions. These improvements are based on deep features and thus should benefit from residual learning.
# MS COCO

Box refinement. Our box refinement partially follows the iterative localization in [6]. In Faster R-CNN, the final output is a regressed box that is different from its proposal box. So for inference, we pool a new feature from the regressed box and obtain a new classification score and a new regressed box. We combine these 300 new predictions with the original 300 predictions. Non-maximum suppression (NMS) is applied on the union set of predicted boxes using an IoU threshold of 0.3 [8], followed by box voting [6]. Box refinement improves mAP by about 2 points (Table 9).
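A simplified sketch of this inference-time merge (using torchvision's NMS; box voting is omitted, and all names are illustrative) might look like:

```python
# Sketch: union the original and refined detections for one class, then apply NMS
# with an IoU threshold of 0.3 as described above. Box voting [6] is not shown.
import torch
from torchvision.ops import nms

def merge_refined(boxes, scores, boxes_ref, scores_ref, iou_thresh=0.3):
    all_boxes = torch.cat([boxes, boxes_ref], dim=0)     # 300 + 300 predictions
    all_scores = torch.cat([scores, scores_ref], dim=0)
    keep = nms(all_boxes, all_scores, iou_thresh)
    return all_boxes[keep], all_scores[keep]
```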
Global context. We combine global context in the Fast R-CNN step. Given the full-image conv feature map, we pool a feature by global Spatial Pyramid Pooling [12] (with a "single-level" pyramid) which can be implemented as "RoI" pooling using the entire image's bounding box as the RoI. This pooled feature is fed into the post-RoI layers to obtain a global context feature. This global feature is concatenated with the original per-region feature, followed by the sibling classification and box regression layers. This new structure is trained end-to-end. Global context improves [email protected] by about 1 point (Table 9).
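Schematically (our sketch; `roi_head` stands for the post-RoI conv5_x layers from the split above, and RoI Align is used in place of RoI pooling), the global context feature can be obtained by treating the whole image as one RoI:

```python
# Sketch: pool a global feature with the full image as the RoI and concatenate it
# with each per-region feature before the sibling classifiers (illustrative only).
import torch
from torchvision.ops import roi_align

def region_features_with_context(feat_map, rois, roi_head, spatial_scale=1.0 / 16):
    # feat_map: (1, C, H, W) shared conv features; rois: (R, 4) boxes in image coords
    _, _, h, w = feat_map.shape
    batch_idx = torch.zeros(rois.size(0), 1)
    roi_boxes = torch.cat([batch_idx, rois], dim=1)                       # (R, 5)
    img_box = torch.tensor([[0.0, 0.0, 0.0, w / spatial_scale, h / spatial_scale]])
    region = roi_head(roi_align(feat_map, roi_boxes, 14, spatial_scale))  # (R, D)
    ctx = roi_head(roi_align(feat_map, img_box, 14, spatial_scale))       # (1, D)
    return torch.cat([region, ctx.expand(region.size(0), -1)], dim=1)     # (R, 2D)
```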
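For the global-context feature described above, a rough sketch (using torchvision's RoI pooling purely for illustration; the layer names, pooled size, and feature stride are assumptions) is:

```python
import torch
from torchvision.ops import roi_align

def add_global_context(feat, rois, post_roi_layers, pool=7, stride=16.0):
    """feat: (1, C, H, W) full-image conv feature map.
    rois: (N, 5) rows of (batch_index, x1, y1, x2, y2) in image coordinates.
    post_roi_layers: the layers normally applied after RoI pooling, returning a
    fixed-length vector per RoI."""
    _, _, h, w = feat.shape
    # "Global" pooling = RoI pooling with the entire image's bounding box as the RoI.
    image_box = torch.tensor([[0.0, 0.0, 0.0, w * stride, h * stride]])
    global_feat = post_roi_layers(roi_align(feat, image_box, (pool, pool), 1.0 / stride))
    region_feat = post_roi_layers(roi_align(feat, rois, (pool, pool), 1.0 / stride))
    # Concatenate the shared global feature with every per-region feature; the
    # sibling classification and box-regression layers then see both.
    return torch.cat([region_feat, global_feat.expand_as(region_feat)], dim=1)
```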
1512.03385 | 58 | Multi-scale testing. In the above, all results are obtained by single-scale training/testing as in [32], where the imageâs shorter side is s = 600 pixels. Multi-scale training/testing has been developed in [12, 7] by selecting a scale from a feature pyramid, and in [33] by using maxout layers. In our current implementation, we have performed multi-scale testing following [33]; we have not performed multi-scale training because of limited time. In addition, we have per- formed multi-scale testing only for the Fast R-CNN step (but not yet for the RPN step). With a trained model, we compute conv feature maps on an image pyramid, where the 200, 400, 600, 800, 1000 imageâs shorter sides are s . } | 1512.03385#58 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
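The multi-scale testing just described (conv feature maps computed on an image pyramid, with RoI features taken from two adjacent scales and merged by max, following [33]) can be sketched as follows. The scale-selection rule shown, picking the two pyramid levels whose rescaled RoI is closest to the 224-pixel training resolution, is an assumption for illustration.

```python
import numpy as np

SCALES = (200, 400, 600, 800, 1000)   # shorter-side sizes of the image pyramid

def multiscale_roi_features(image, rois, backbone, roi_pool, train_size=224):
    """backbone(image, s) -> conv feature map of the image resized to shorter side s.
    roi_pool(feat, roi, resize_factor) -> a (1, D) pooled feature for one RoI."""
    feats = {s: backbone(image, s) for s in SCALES}
    short = min(image.shape[0], image.shape[1])
    merged = []
    for roi in rois:
        side = np.sqrt((roi[2] - roi[0]) * (roi[3] - roi[1]))
        # two pyramid levels whose rescaled RoI is closest to the training size
        s1, s2 = sorted(SCALES, key=lambda s: abs(side * s / short - train_size))[:2]
        f1 = roi_pool(feats[s1], roi, s1 / short)
        f2 = roi_pool(feats[s2], roi, s2 / short)
        merged.append(np.maximum(f1, f2))          # maxout merge of the two scales
    return np.concatenate(merged, axis=0)
```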
1512.03385 | 60 | Table 9. Object detection improvements on MS COCO using Faster R-CNN and ResNet-101.
Table 10. Detection results on the PASCAL VOC 2007 test set (the "baseline+++" system uses the improvements of Table 9).

| system | net | data | mAP | areo | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline | VGG-16 | 07+12 | 73.2 | 76.5 | 79.0 | 70.9 | 65.5 | 52.1 | 83.1 | 84.7 | 86.4 | 52.0 | 81.9 | 65.7 | 84.8 | 84.6 | 77.5 | 76.7 | 38.8 | 73.6 | 73.9 | 83.0 | 72.6 |
| baseline | ResNet-101 | 07+12 | 76.4 | 79.8 | 80.7 | 76.2 | 68.3 | 55.9 | 85.1 | 85.3 | 89.8 | 56.7 | 87.8 | 69.4 | 88.3 | 88.9 | 80.9 | 78.4 | 41.7 | 78.6 | 79.8 | 85.3 | 72.0 |
| baseline+++ | ResNet-101 | COCO+07+12 | 85.6 | 90.0 | 89.6 | 87.8 | 80.8 | 76.1 | 89.9 | 89.9 | 89.6 | 75.5 | 90.0 | 80.7 | 89.6 | 90.3 | 89.1 | 88.7 | 65.4 | 88.1 | 85.6 | 89.0 | 86.8 |
1512.03385 | 62 | mAP areo 70.4 84.9 79.8 74.3 53.9 49.8 77.5 75.9 88.5 45.6 77.1 55.3 86.9 81.7 80.9 79.6 40.1 72.6 60.9 81.2 61.5 baseline baseline 73.8 86.5 81.6 77.2 58.0 51.0 78.6 76.6 93.2 48.6 80.4 59.0 92.1 85.3 84.8 80.7 48.1 77.3 66.5 84.7 65.6 baseline+++ ResNet-101 COCO+07++12 83.8 92.1 88.4 84.8 75.9 71.4 86.3 87.8 94.2 66.8 89.4 69.2 93.9 91.9 90.9 89.6 67.9 88.2 76.8 90.3 80.0 net VGG-16 ResNet-101 data 07++12 07++12 system bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train Table 11. Detection results on the PASCAL VOC 2012 test set (http://host.robots.ox.ac.uk:8080/leaderboard/ | 1512.03385#62 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
1512.03385 | 64 | We select two adjacent scales from the pyramid following [33]. RoI pooling and subsequent layers are performed on the feature maps of these two scales [33], which are merged by maxout as in [33]. Multi-scale testing improves the mAP by over 2 points (Table 9).
| | val2 | test |
|---|---|---|
| GoogLeNet [44] (ILSVRC'14) | - | 43.9 |
| our single model (ILSVRC'15) | 60.5 | 58.8 |
| our ensemble (ILSVRC'15) | 63.6 | 62.1 |
Using validation data. Next we use the 80k+40k trainval set for training and the 20k test-dev set for evaluation. The test-dev set has no publicly available ground truth and the result is reported by the evaluation server. Under this setting, the results are an [email protected] of 55.7% and an mAP@[.5, .95] of 34.9% (Table 9). This is our single-model result.
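For readers unfamiliar with the two numbers quoted here, the relationship between them is sketched below: [email protected] is average precision at a single IoU threshold, while the COCO-style mAP@[.5, .95] averages AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05. The `ap_at_iou` function is a placeholder for a full AP computation.

```python
import numpy as np

def coco_style_metrics(detections, ground_truth, ap_at_iou):
    """ap_at_iou(dets, gts, thr) -> mean AP over classes at one IoU threshold."""
    map_50 = ap_at_iou(detections, ground_truth, 0.50)
    thresholds = np.arange(0.50, 1.00, 0.05)          # 0.50, 0.55, ..., 0.95
    map_coco = float(np.mean([ap_at_iou(detections, ground_truth, t) for t in thresholds]))
    return map_50, map_coco    # e.g. 55.7 and 34.9 for the single model above
```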
1512.03385 | 65 | Ensemble. In Faster R-CNN, the system is designed to learn region proposals and also object classiï¬ers, so an ensemble can be used to boost both tasks. We use an ensemble for proposing regions, and the union set of proposals are pro- cessed by an ensemble of per-region classiï¬ers. Table 9 shows our result based on an ensemble of 3 networks. The mAP is 59.0% and 37.4% on the test-dev set. This result won the 1st place in the detection task in COCO 2015.
# PASCAL VOC
We revisit the PASCAL VOC dataset based on the above model. With the single model on the COCO dataset (55.7% [email protected] in Table 9), we fine-tune this model on the PASCAL VOC sets. The improvements of box refinement, context, and multi-scale testing are also adopted. By doing so
Table 12. Our results (mAP, %) on the ImageNet detection dataset. Our detection system is Faster R-CNN [32] with the improvements in Table 9, using ResNet-101.
1512.03385 | 66 | we achieve 85.6% mAP on PASCAL VOC 2007 (Table 10) and 83.8% on PASCAL VOC 2012 (Table 11)6. The result on PASCAL VOC 2012 is 10 points higher than the previ- ous state-of-the-art result [6].
# ImageNet Detection
The ImageNet Detection (DET) task involves 200 object categories. The accuracy is evaluated by [email protected]. Our object detection algorithm for ImageNet DET is the same as that for MS COCO in Table 9. The networks are pre-trained on the 1000-class ImageNet classification set, and are fine-tuned on the DET data. We split the validation set into two parts (val1/val2) following [8]. We fine-tune the detection models using the DET training set and the val1 set. The val2 set is used for validation. We do not use other ILSVRC 2015 data. Our single model with ResNet-101 has
6 http://host.robots.ox.ac.uk:8080/anonymous/3OJ4OJ.html, submitted on 2015-11-26.
1512.03385 | 67 | 6http://host.robots.ox.ac.uk:8080/anonymous/3OJ4OJ.html,
submitted on 2015-11-26.
| LOC method | LOC network | testing | LOC error on GT CLS | classification network | top-5 LOC error on predicted CLS |
|---|---|---|---|---|---|
| VGG's [41] | VGG-16 | 1-crop | 33.1 [41] | - | - |
| RPN | ResNet-101 | 1-crop | 13.3 | - | - |
| RPN | ResNet-101 | dense | 11.7 | - | - |
| RPN | ResNet-101 | dense | - | ResNet-101 | 14.4 |
| RPN+RCNN | ResNet-101 | dense | - | ResNet-101 | 10.6 |
| RPN+RCNN | ensemble | dense | - | ensemble | 8.9 |
Table 13. Localization error (%) on the ImageNet validation. In the column of "LOC error on GT class" ([41]), the ground truth class is used. In the "testing" column, "1-crop" denotes testing on a center crop of 224x224 pixels, "dense" denotes dense (fully convolutional) and multi-scale testing.
58.8% mAP and our ensemble of 3 models has 62.1% mAP on the DET test set (Table 12). This result won the 1st place in the ImageNet detection task in ILSVRC 2015, surpassing the second place by 8.5 points (absolute).
1512.03385 | 68 | # C. ImageNet Localization
The ImageNet Localization (LOC) task [36] requires classifying and localizing the objects. Following [40, 41], we assume that the image-level classifiers are first adopted for predicting the class labels of an image, and the localization algorithm only accounts for predicting bounding boxes based on the predicted classes. We adopt the "per-class regression" (PCR) strategy [40, 41], learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization. We train networks on the provided 1000-class ImageNet training set.
1512.03385 | 69 | Our localization algorithm is based on the RPN frame- work of [32] with a few modiï¬cations. Unlike the way in [32] that is category-agnostic, our RPN for localization is designed in a per-class form. This RPN ends with two sib- 1 convolutional layers for binary classiï¬cation (cls) ling 1 and box regression (reg), as in [32]. The cls and reg layers are both in a per-class from, in contrast to [32]. Speciï¬- cally, the cls layer has a 1000-d output, and each dimension is binary logistic regression for predicting being or not be- ing an object class; the reg layer has a 1000 4-d output consisting of box regressors for 1000 classes. As in [32], our bounding box regression is with reference to multiple translation-invariant âanchorâ boxes at each position. | 1512.03385#69 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
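A minimal sketch of the per-class RPN head described above, with two sibling 1x1 convolutions on a shared feature map: the cls branch produces one binary logit per class and the reg branch four box offsets per class (per anchor). The channel widths, the intermediate 3x3 convolution, and the anchor count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerClassRPNHead(nn.Module):
    """Per-class localization RPN: for each spatial position and anchor,
    1000 binary class scores and 1000 x 4 class-specific box regressions."""
    def __init__(self, in_channels=1024, num_classes=1000, num_anchors=9):
        super().__init__()
        self.shared = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.cls = nn.Conv2d(512, num_anchors * num_classes, kernel_size=1)
        self.reg = nn.Conv2d(512, num_anchors * num_classes * 4, kernel_size=1)

    def forward(self, feat):                          # feat: (B, C, H, W)
        x = torch.relu(self.shared(feat))
        scores = torch.sigmoid(self.cls(x))           # binary logistic regression per class
        deltas = self.reg(x)                          # per-class box regressors
        return scores, deltas
```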
1512.03385 | 70 | As in our ImageNet classiï¬cation training (Sec. 3.4), we randomly sample 224 224 crops for data augmentation. We use a mini-batch size of 256 images for ï¬ne-tuning. To avoid negative samples being dominate, 8 anchors are ran- domly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 [32]. For testing, the network is applied on the image fully-convolutionally.
Table 13 compares the localization results. Following [41], we first perform "oracle" testing using the ground truth class as the classification prediction. VGG's paper [41] reports a center-crop error of 33.1% (Table 13) using ground truth classes.
| method | top-5 localization err, val | top-5 localization err, test |
|---|---|---|
| OverFeat [40] (ILSVRC'13) | 30.0 | 29.9 |
| GoogLeNet [44] (ILSVRC'14) | - | 26.7 |
| VGG [41] (ILSVRC'14) | 26.9 | 25.3 |
| ours (ILSVRC'15) | 8.9 | 9.0 |
Table 14. Comparisons of localization error (%) on the ImageNet dataset with state-of-the-art methods.
1512.03385 | 71 | Table 14. Comparisons of localization error (%) on the ImageNet dataset with state-of-the-art methods.
VGG's paper [41] reports a center-crop error of 33.1% (Table 13) using ground truth classes. Under the same setting, our RPN method using ResNet-101 significantly reduces the center-crop error to 13.3%. This comparison demonstrates the excellent performance of our framework. With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7% using ground truth classes. Using ResNet-101 for predicting classes (4.6% top-5 classification error, Table 4), the top-5 localization error is 14.4%.
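To make the two evaluation settings of Table 13 concrete, the sketch below contrasts "oracle" testing (ground-truth class given) with testing on the top-5 predicted classes. The IoU >= 0.5 success criterion is the standard ILSVRC localization rule and is an assumption here; it is not stated in this excerpt.

```python
def top5_localization_error(samples, predict_classes, predict_box, iou, oracle=False):
    """samples: iterable of (image, gt_class, gt_box) tuples. An image counts as
    correct if any considered class matches gt_class and its predicted box has
    IoU >= 0.5 with the ground-truth box."""
    errors, total = 0, 0
    for image, gt_class, gt_box in samples:
        classes = [gt_class] if oracle else predict_classes(image)[:5]
        correct = any(c == gt_class and iou(predict_box(image, c), gt_box) >= 0.5
                      for c in classes)
        errors += 0 if correct else 1
        total += 1
    return errors / max(total, 1)
```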
1512.03385 | 72 | The above results are only based on the proposal network (RPN) in Faster R-CNN [32]. One may use the detection network (Fast R-CNN [7]) in Faster R-CNN to improve the results. But we notice that on this dataset, one image usually contains a single dominate object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN [7] generates samples of small variations, which may not be desired for stochastic training. Motivated by this, in our current experiment we use the original R- CNN [8] that is RoI-centric, in place of Fast R-CNN. | 1512.03385#72 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
1512.03385 | 73 | Our R-CNN implementation is as follows. We apply the per-class RPN trained as above on the training images to predict bounding boxes for the ground truth class. These predicted boxes play a role of class-dependent proposals. For each training image, the highest scored 200 proposals are extracted as training samples to train an R-CNN classi- ï¬er. The image region is cropped from a proposal, warped to 224 224 pixels, and fed into the classiï¬cation network as in R-CNN [8]. The outputs of this network consist of two sibling fc layers for cls and reg, also in a per-class form. This R-CNN network is ï¬ne-tuned on the training set us- ing a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the highest scored 200 proposals for each predicted class, and the R-CNN network is used to update these proposalsâ scores and box positions. | 1512.03385#73 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
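The RoI-centric R-CNN stage described above can be sketched as follows: the per-class RPN keeps its 200 highest-scoring proposals for a given class, each proposal is cropped from the image, warped to 224x224, and passed through a separate classification network whose sibling outputs re-score the proposal and refine its box. Function and parameter names here are illustrative assumptions.

```python
import numpy as np

def rcnn_rescore(image, class_id, rpn_propose, crop_and_warp, rcnn_net, top_k=200):
    """rpn_propose(image, class_id) -> (boxes (N, 4), scores (N,)) from the per-class RPN.
    crop_and_warp(image, box, size) -> a size x size image patch.
    rcnn_net(patches, class_id)     -> (new_scores (K,), box_deltas (K, 4))."""
    boxes, scores = rpn_propose(image, class_id)
    keep = np.argsort(scores)[::-1][:top_k]                 # highest-scored 200 proposals
    boxes = boxes[keep]
    patches = np.stack([crop_and_warp(image, b, size=224) for b in boxes])
    new_scores, deltas = rcnn_net(patches, class_id)        # sibling cls / reg outputs
    # `deltas` follow the usual box-regression parameterization and are decoded
    # back to absolute coordinates before the final ranking.
    return boxes, new_scores, deltas
```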
1512.03385 | 74 | This method reduces the top-5 localization error to 10.6% (Table 13). This is our single-model result on the validation set. Using an ensemble of networks for both clas- siï¬cation and localization, we achieve a top-5 localization error of 9.0% on the test set. This number signiï¬cantly out- performs the ILSVRC 14 results (Table 14), showing a 64% relative reduction of error. This result won the 1st place in the ImageNet localization task in ILSVRC 2015. | 1512.03385#74 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
1512.02595 | 0 | arXiv:1512.02595v1 [cs.CL] 8 Dec 2015
# Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
Baidu Research - Silicon Valley AI Lab* Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu
# Abstract
1512.02325 | 1 | Abstract. We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines pre- dictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into sys- tems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets conï¬rm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a uniï¬ed framework for both training and inference. For 300 à 300 in- put, SSD achieves 74.3% mAP1 on VOC2007 test at 59 FPS on a | 1512.02325#1 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
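The "default boxes over different aspect ratios and scales per feature map location" mentioned above can be generated roughly as below; the particular scales and aspect ratios are placeholders rather than the values used in the paper.

```python
import numpy as np

def default_boxes(feat_h, feat_w, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """One (cx, cy, w, h) default box per aspect ratio, centred on every cell of a
    feat_h x feat_w feature map; coordinates are relative to the image size."""
    boxes = []
    for i in range(feat_h):
        for j in range(feat_w):
            cx, cy = (j + 0.5) / feat_w, (i + 0.5) / feat_h
            for ar in aspect_ratios:
                boxes.append([cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)])
    return np.array(boxes)

# Default boxes from feature maps of different resolutions cover different object
# sizes: a fine map with a small scale, a coarse map with a large one.
priors = np.vstack([default_boxes(38, 38, 0.1), default_boxes(3, 3, 0.8)])
```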
1512.02595 | 1 | # Abstract
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system [26]. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
# Introduction
1512.02325 | 2 | framework for both training and inference. For 300 Ã 300 in- put, SSD achieves 74.3% mAP1 on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512 Ã 512 input, SSD achieves 76.9% mAP, outperforming a compa- rable state-of-the-art Faster R-CNN model. Compared to other single stage meth- ods, SSD has much better accuracy even with a smaller input image size. Code is available at: https://github.com/weiliu89/caffe/tree/ssd . | 1512.02325#2 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
1512.02595 | 2 | # Introduction
Decades worth of hand-engineered domain knowledge has gone into current state-of-the-art automatic speech recognition (ASR) pipelines. A simple but powerful alternative solution is to train such ASR models end-to-end, using deep learning to replace most modules with a single model [26]. We present the second generation of our speech system that exemplifies the major advantages of end-to-end learning. The Deep Speech 2 ASR pipeline approaches or exceeds the accuracy of Amazon Mechanical Turk human workers on several benchmarks, works in multiple languages with little modification, and is deployable in a production setting. It thus represents a significant step towards a single ASR system that addresses the entire range of speech recognition contexts handled by humans. Since our system is built on end-to-end deep learning, we can employ a spectrum of deep learning techniques: capturing large training sets, training larger models with high performance computing, and methodically exploring the space of neural network architectures. We show that through these techniques we are able to reduce error rates of our previous end-to-end system [26] in English by up to 43%, and can also recognize Mandarin speech with high accuracy.
1512.02325 | 3 | arXiv:1512.02325v5
Keywords: Real-time Object Detection; Convolutional Neural Network
# 1 Introduction
Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work [1] through the current leading results on PASCAL VOC, COCO, and ILSVRC detection, all based on Faster R-CNN [2], albeit with deeper features such as [3]. While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications.
1 We achieved even better results using an improved data augmentation scheme in follow-on experiments: 77.2% mAP for 300x300 input and 79.8% mAP for 512x512 input on VOC2007. Please see Sec. 3.6 for details.
1512.02595 | 3 | One of the challenges of speech recognition is the wide range of variability in speech and acoustics. As a result, modern ASR pipelines are made up of numerous components including complex feature extraction, acoustic models, language and pronunciation models, speaker adaptation, etc. Build- ing and tuning these individual components makes developing a new speech recognizer very hard, especially for a new language. Indeed, many parts do not generalize well across environments or languages and it is often necessary to support multiple application-speciï¬c systems in order to pro- vide acceptable accuracy. This state of affairs is different from human speech recognition: people
* Authorship order is alphabetical.
have the innate ability to learn any language during childhood, using general skills to learn language. After learning to read and write, most humans can transcribe speech with robustness to variation in environment, speaker accent and noise, without additional training for the transcription task. To meet the expectations of speech recognition users, we believe that a single engine must learn to be similarly competent; able to handle most applications with only minor modifications and able to learn new languages from scratch without dramatic changes. Our end-to-end system puts this goal within reach, allowing us to approach or exceed the performance of human workers on several tests in two very different languages: Mandarin and English.
1512.02325 | 4 | 2 Liu et al.
Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy.
1512.02595 | 4 | Since Deep Speech 2 (DS2) is an end-to-end deep learning system, we can achieve performance gains by focusing on three crucial components: the model architecture, large labeled training datasets, and computational scale. This approach has also yielded great advances in other appli- cation areas such as computer vision and natural language. This paper details our contribution to these three areas for speech recognition, including an extensive investigation of model architectures and the effect of data and model size on recognition performance. In particular, we describe numer- ous experiments with neural networks trained with the Connectionist Temporal Classiï¬cation (CTC) loss function [22] to predict speech transcriptions from audio. We consider networks composed of many layers of recurrent connections, convolutional ï¬lters, and nonlinearities, as well as the impact of a speciï¬c instance of Batch Normalization [63] (BatchNorm) applied to RNNs. We not only ï¬nd networks that produce much better predictions than those in previous work [26], but also ï¬nd instances of recurrent models that can be deployed in a production setting with no signiï¬cant loss in accuracy. | 1512.02595#4 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
1512.02325 | 5 | This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf [4,5]), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these | 1512.02325#5 | SSD: Single Shot MultiBox Detector |
1512.02595 | 5 | Beyond the search for better model architecture, deep learning systems benefit greatly from large quantities of training data. We detail our data capturing pipeline that has enabled us to create larger datasets than what is typically used to train speech recognition systems. Our English speech system is trained on 11,940 hours of speech, while the Mandarin system is trained on 9,400 hours. We use data synthesis to further augment the data during training. | 1512.02595#5 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 6 | detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications, especially using multiple layers for prediction at different scales, we can achieve high accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks [3]. Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. | 1512.02325#6 | SSD: Single Shot MultiBox Detector |
1512.02595 | 6 | Training on large quantities of data usually requires the use of larger models. Indeed, our models have many more parameters than those used in our previous system. Training a single model at these scales requires tens of exaFLOPs¹ that would require 3-6 weeks to execute on a single GPU. This makes model exploration a very time consuming exercise, so we have built a highly optimized training system that uses 8 or 16 GPUs to train one model. In contrast to previous large-scale training approaches that use parameter servers and asynchronous updates [18, 10], we use synchronous SGD, which is easier to debug while testing new ideas, and also converges faster for the same degree of data parallelism. To make the entire system efficient, we describe optimizations for a single GPU as well as improvements to scalability for multiple GPUs. We employ optimization techniques typically found in High Performance Computing to improve scalability. These optimizations include a fast implementation of the CTC loss function on the GPU, and a custom memory allocator. We also use carefully integrated compute nodes and a custom implementation of all-reduce to accelerate inter-GPU communication. Overall the system sustains approximately 50 teraFLOP/second when training on 16 GPUs. This amounts to | 1512.02595#6 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
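The quoted aggregate throughput implies roughly 3 teraFLOP/second per GPU; a quick back-of-the-envelope check, using only the 50 teraFLOP/second total and the 16-GPU count stated above:

```python
# Back-of-the-envelope check of the sustained per-GPU throughput quoted above.
total_tflops = 50.0   # sustained teraFLOP/second across the whole 16-GPU training job
num_gpus = 16
per_gpu_tflops = total_tflops / num_gpus
print(f"~{per_gpu_tflops:.2f} teraFLOP/second per GPU")  # ~3.12 teraFLOP/second per GPU
```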
1512.02325 | 7 | We summarize our contributions as follows:
– We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN).
– The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps.
– To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio.
– These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off.
– Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches.
# 2 The Single Shot Detector (SSD)
This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. | 1512.02325#7 | SSD: Single Shot MultiBox Detector |
1512.02325 | 8 | [Fig. 1 illustration: (a) an image with ground truth boxes, (b) an 8 × 8 feature map, (c) a 4 × 4 feature map; each default box carries location offsets (cx, cy, w, h) and per-class confidences (c1, c2, ..., cp).]
Fig. 1: SSD framework. (a) SSD only needs an input image and ground truth boxes for each object during training. In a convolutional fashion, we evaluate a small set (e.g. 4) of default boxes of different aspect ratios at each location in several feature maps with different scales (e.g. 8 × 8 and 4 × 4 in (b) and (c)). For each default box, we predict both the shape offsets and the confidences for all object categories ((c1, c2, ..., cp)). At training time, we first match these default boxes to the ground truth boxes. For example, we have matched two default boxes with the cat and one with the dog, which are treated as positives and the rest as negatives. The model loss is a weighted sum between localization loss (e.g. Smooth L1 [6]) and confidence loss (e.g. Softmax).
# 2.1 Model | 1512.02325#8 | SSD: Single Shot MultiBox Detector |
1512.02595 | 8 | We benchmark our system on several publicly available test sets and compare the results to our previous end-to-end system [26]. Our goal is to eventually reach human-level performance not only on specific benchmarks, where it is possible to improve through dataset-specific tuning, but on a range of benchmarks that reflects a diverse set of scenarios. To that end, we have also measured the performance of human workers on each benchmark for comparison. We find that our system outperforms humans in some commonly-studied benchmarks and has significantly closed the gap in much harder cases. In addition to public benchmarks, we show the performance of our Mandarin system on internal datasets that reflect real-world product scenarios.
Deep learning systems can be challenging to deploy at scale. Large neural networks are computationally expensive to evaluate for each user utterance, and some network architectures are more easily deployed than others. Through model exploration, we find high-accuracy, deployable network architectures, which we detail here. We also employ a batching scheme suitable for GPU
Footnote 1: 1 exaFLOP = 10^18 FLoating-point OPerations.
| 1512.02595#8 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 9 | # 2.1 Model
The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network². We then add auxiliary structure to the network to produce detections with the following key features:
Multi-scale feature maps for detection: We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat [4] and YOLO [5] that operate on a single scale feature map). | 1512.02325#9 | SSD: Single Shot MultiBox Detector |
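As a rough illustration of this multi-scale design (a sketch, not the authors' implementation; the channel counts are made up), extra convolutional blocks can be appended to the truncated base so that each block halves the spatial resolution and contributes one more feature map to predict detections from:

```python
import torch
import torch.nn as nn

# Minimal sketch: extra feature layers appended to a truncated base network.
# Each block halves the spatial resolution, so detections can be predicted
# from feature maps of progressively coarser scale.
class ExtraFeatureLayers(nn.Module):
    def __init__(self, in_channels=1024):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 256, 1), nn.ReLU(inplace=True),
                          nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(512, 128, 1), nn.ReLU(inplace=True),
                          nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True)),
        ])

    def forward(self, x):
        feature_maps = [x]          # the last base-network map is also a prediction source
        for block in self.blocks:
            x = block(x)
            feature_maps.append(x)  # one additional prediction source per scale
        return feature_maps
```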
1512.02595 | 9 | Footnote 1: 1 exaFLOP = 10^18 FLoating-point OPerations.
hardware called Batch Dispatch that leads to an efficient, real-time implementation of our Mandarin engine on production servers. Our implementation achieves a 98th percentile compute latency of 67 milliseconds, while the server is loaded with 10 simultaneous audio streams.
The remainder of the paper is as follows. We begin with a review of related work in deep learning, end-to-end speech recognition, and scalability in Section 2. Section 3 describes the architectural and algorithmic improvements to the model and Section 4 explains how to efficiently compute them. We discuss the training data and steps taken to further augment the training set in Section 5. An analysis of results for the DS2 system in English and Mandarin is presented in Section 6. We end with a description of the steps needed to deploy DS2 to real users in Section 7.
# 2 Related Work | 1512.02595#9 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 10 | Convolutional predictors for detection: Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m × n with p channels, the basic element for predicting parameters of a potential detection is a 3 × 3 × p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m × n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default
Footnote 2: We use the VGG-16 network as a base, but other networks should also produce good results.
| 1512.02325#10 | SSD: Single Shot MultiBox Detector |
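A minimal sketch of such a predictor head, assuming PyTorch and illustrative sizes (p input channels, k default boxes per location, c classes; none of these values come from the paper's configuration): a single 3 × 3 convolution emits, at every feature-map location, k × c class scores and k × 4 box offsets.

```python
import torch
import torch.nn as nn

# Illustrative predictor head (not the reference implementation).
p, c, k = 512, 21, 4

cls_head = nn.Conv2d(p, k * c, kernel_size=3, padding=1)   # per-location category scores
loc_head = nn.Conv2d(p, k * 4, kernel_size=3, padding=1)   # per-location box offsets

feature_map = torch.randn(1, p, 38, 38)        # one m x n x p feature map (m = n = 38 here)
scores = cls_head(feature_map)                 # shape (1, k*c, 38, 38)
offsets = loc_head(feature_map)                # shape (1, k*4, 38, 38)
print(scores.shape, offsets.shape)
```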
1512.02595 | 10 | # 2 Related Work
This work is inspired by previous work in both deep learning and speech recognition. Feed-forward neural network acoustic models were explored more than 20 years ago [7, 50, 19]. Recurrent neural networks and networks with convolution were also used in speech recognition around the same time [51, 67]. More recently DNNs have become a fixture in the ASR pipeline with almost all state of the art speech work containing some form of deep neural network [42, 29, 17, 16, 43, 58]. Convolutional networks have also been found beneficial for acoustic models [1, 53]. Recurrent neural networks, typically LSTMs, are just beginning to be deployed in state-of-the-art recognizers [24, 25, 55] and work well together with convolutional layers for the feature extraction [52]. Models with both bidirectional [24] and unidirectional recurrence have been explored as well. | 1512.02595#10 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 11 | Footnote 2: We use the VGG-16 network as a base, but other networks should also produce good results.
[Fig. 2 illustration: the SSD network (300 × 300 input, 74.3 mAP, 59 FPS) appends extra feature layers to a truncated VGG-16 base and applies 3 × 3 × (4 × (Classes + 4)) convolutional classifiers to several feature maps before non-maximum suppression; the YOLO network (customized architecture, 448 × 448 input, 63.4 mAP, 45 FPS) predicts from a single-scale feature map.]
Fig. 2: A comparison between two single shot detection models: SSD and YOLO [5]. Our SSD model adds several feature layers to the end of a base network, which predict the offsets to default boxes of different scales and aspect ratios and their associated confidences. SSD with a 300 × 300 input size significantly outperforms its 448 × 448 YOLO counterpart in accuracy on VOC2007 test while also improving the speed. | 1512.02325#11 | SSD: Single Shot MultiBox Detector |
1512.02595 | 11 | End-to-end speech recognition is an active area of research, showing compelling results when used to re-score the outputs of a DNN-HMM [23] and standalone [26]. Two methods are currently used to map variable length audio sequences directly to variable length transcriptions. The RNN encoder-decoder paradigm uses an encoder RNN to map the input to a fixed length vector and a decoder network to expand the fixed length vector into a sequence of output predictions [11, 62]. Adding an attentional mechanism to the decoder greatly improves performance of the system, particularly with long inputs or outputs [2]. In speech, the RNN encoder-decoder with attention performs well both in predicting phonemes [12] or graphemes [3, 8]. | 1512.02595#11 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 12 | box position relative to each feature map location (cf the architecture of YOLO [5] that uses an intermediate fully connected layer instead of a convolutional filter for this step). Default boxes and aspect ratios: We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k at a given location, we compute c class scores and the 4 offsets relative to the original default box shape. This results in a total of (c + 4)k filters that are applied around each location in the feature map, yielding (c + 4)kmn outputs for an m × n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN [2], however we apply them to several feature maps of different resolutions. | 1512.02325#12 | SSD: Single Shot MultiBox Detector |
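Plugging in illustrative numbers (not the paper's exact configuration) gives a feel for the output count per feature map:

```python
# Output count per feature map: (c + 4) * k filters applied at each of m * n locations.
c, k = 21, 4            # classes and default boxes per location (illustrative)
m, n = 38, 38           # feature map size (illustrative)
filters_per_location = (c + 4) * k          # 100
outputs = filters_per_location * m * n      # 144,400 values for this one map
print(filters_per_location, outputs)
```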
1512.02595 | 12 | The other commonly used technique for mapping variable length audio input to variable length output is the CTC loss function [22] coupled with an RNN to model temporal information. The CTC-RNN model performs well in end-to-end speech recognition with grapheme outputs [23, 27, 26, 40]. The CTC-RNN model has also been shown to work well in predicting phonemes [41, 54], though a lexicon is still needed in this case. Furthermore it has been necessary to pre-train the CTC-RNN network with a DNN cross-entropy network that is fed frame-wise alignments from a GMM-HMM system [54]. In contrast, we train the CTC-RNN networks from scratch without the need of frame-wise alignments for pre-training. | 1512.02595#12 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
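A minimal sketch of a CTC-RNN set-up of this kind, assuming PyTorch and purely illustrative shapes (this is not the DS2 architecture or configuration):

```python
import torch
import torch.nn as nn

T, N, F, C = 200, 8, 161, 29          # frames, batch, spectrogram bins, characters (index 0 = blank)

rnn = nn.GRU(input_size=F, hidden_size=256, num_layers=3, bidirectional=True)
proj = nn.Linear(512, C)              # 512 = 2 * hidden_size because the RNN is bidirectional
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, F)                          # a batch of spectrogram frames
targets = torch.randint(1, C, (N, 50))            # character indices (0 is reserved for blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 50, dtype=torch.long)

hidden, _ = rnn(x)                                # (T, N, 512)
log_probs = proj(hidden).log_softmax(dim=-1)      # (T, N, C), as required by CTCLoss
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                   # no frame-wise alignments are needed
```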
1512.02595 | 13 | Exploiting scale in deep learning has been central to the success of the field thus far [36, 38]. Training on a single GPU resulted in substantial performance gains [49], which were subsequently scaled linearly to two [36] or more GPUs [15]. We take advantage of work in increasing individual GPU efficiency for low-level deep learning primitives [9]. We build on the past work in using model-parallelism [15], data-parallelism [18] or a combination of the two [64, 26] to create a fast and highly scalable system for training deep RNNs in speech recognition.
Data has also been central to the success of end-to-end speech recognition, with over 7000 hours of labeled speech used in Deep Speech 1 (DS1) [26]. Data augmentation has been highly effective in improving the performance of deep learning in computer vision [39, 56, 14]. This has also been shown to improve speech systems [21, 26]. Techniques used for data augmentation in speech range from simple noise addition [26] to complex perturbations such as simulating changes to the vocal tract length and rate of speech of the speaker [31, 35]. | 1512.02595#13 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 14 | # 2.2 Training
The key difference between training SSD and training a typical detector that uses region proposals is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO [5] and for the region proposal stage of Faster R-CNN [2] and MultiBox [7]. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies.
Matching strategy: During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox [7]). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. | 1512.02325#14 | SSD: Single Shot MultiBox Detector |
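A small sketch of this two-step matching in plain NumPy (the 0.5 threshold comes from the text; the box format and everything else are illustrative):

```python
import numpy as np

def jaccard(box_a, box_b):
    """IoU (jaccard overlap) of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match(defaults, ground_truths, threshold=0.5):
    """Return {default_index: gt_index} following the matching strategy above."""
    overlaps = np.array([[jaccard(d, g) for g in ground_truths] for d in defaults])
    matches = {}
    # 1) each ground truth gets the default box with the best overlap
    for j in range(overlaps.shape[1]):
        matches[int(overlaps[:, j].argmax())] = j
    # 2) any default box whose best overlap exceeds the threshold is also a positive
    for i in range(overlaps.shape[0]):
        j = int(overlaps[i].argmax())
        if overlaps[i, j] > threshold:
            matches.setdefault(i, j)
    return matches
```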
1512.02595 | 14 | Existing speech systems can also be used to bootstrap new data collection. In one approach, the authors use one speech engine to align and filter a thousand hours of read speech [46]. In another approach, a heavy-weight offline speech recognizer is used to generate transcriptions for tens of thousands of hours of speech [33]. This is then passed through a filter and used to re-train the recognizer, resulting in significant performance gains. We draw inspiration from these past approaches in
bootstrapping larger datasets and data augmentation to increase the effective amount of labeled data for our system.
# 3 Model Architecture | 1512.02595#14 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 15 | Training objective: The SSD training objective is derived from the MultiBox objective but is extended to handle multiple object categories. Let $x_{ij}^{p} \in \{1, 0\}$ be an indicator for matching the $i$-th default box to the $j$-th ground truth box of category $p$. In the matching strategy above, we can have $\sum_i x_{ij}^{p} \geq 1$. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf):
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right) \qquad (1)$$
where $N$ is the number of matched default boxes. If $N = 0$, we set the loss to 0. The localization loss is a Smooth L1 loss [6] between the predicted box ($l$) and the ground truth box ($g$) parameters. Similar to Faster R-CNN [2], we regress to offsets for the center $(cx, cy)$ of the default bounding box ($d$) and for its width ($w$) and height ($h$). | 1512.02325#15 | SSD: Single Shot MultiBox Detector |
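Read as code, Eq. (1) is simply the following (a sketch; the two loss terms stand in for the confidence and localization losses defined below):

```python
def ssd_loss(l_conf, l_loc, num_matched, alpha=1.0):
    """Overall objective of Eq. (1): confidence plus weighted localization loss, averaged over matches."""
    if num_matched == 0:          # no default box matched any ground truth
        return 0.0
    return (l_conf + alpha * l_loc) / num_matched
```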
1512.02595 | 15 | bootstrapping larger datasets and data augmentation to increase the effective amount of labeled data for our system.
# 3 Model Architecture
A simple multi-layer model with a single recurrent layer cannot exploit thousands of hours of labelled speech. In order to learn from datasets this large, we increase the model capacity via depth. We explore architectures with up to 11 layers including many bidirectional recurrent layers and convolutional layers. These models have nearly 8 times the amount of computation per data example as the models in Deep Speech 1, making fast optimization and computation critical. In order to optimize these models successfully, we use Batch Normalization for RNNs and a novel optimization curriculum we call SortaGrad. We also exploit long strides between RNN inputs to reduce computation per example by a factor of 3. This is helpful for both training and evaluation, though requires some modifications in order to work well with CTC. Finally, though many of our research results make use of bidirectional recurrent layers, we find that excellent models exist using only unidirectional recurrent layers, a feature that makes such models much easier to deploy. Taken together these features allow us to tractably optimize deep RNNs and improve performance by more than 40% in both English and Mandarin error rates over the smaller baseline models.
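SortaGrad is described in detail later in the paper; as a rough sketch, under the assumption that the curriculum simply orders the first epoch by utterance length and shuffles afterwards:

```python
import random

def sortagrad_batches(dataset, batch_size, epoch):
    """Sketch of a length-ordered first epoch (SortaGrad-style curriculum).

    `dataset` is a list of (features, label) pairs; utterance length is taken
    to be len(features). From the second epoch on, ordering is random.
    """
    if epoch == 0:
        order = sorted(dataset, key=lambda ex: len(ex[0]))     # short utterances first
    else:
        order = random.sample(dataset, len(dataset))           # shuffled copy
    for start in range(0, len(order), batch_size):
        yield order[start:start + batch_size]
```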
# 3.1 Preliminaries | 1512.02595#15 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |
1512.02325 | 16 | $$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k}\, \mathrm{smooth}_{L1}\!\left(l_i^m - \hat{g}_j^m\right)$$
$$\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx})/d_i^{w} \qquad \hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy})/d_i^{h} \qquad \hat{g}_j^{w} = \log\!\left(\frac{g_j^{w}}{d_i^{w}}\right) \qquad \hat{g}_j^{h} = \log\!\left(\frac{g_j^{h}}{d_i^{h}}\right) \qquad (2)$$
The confidence loss is the softmax loss over multiple class confidences ($c$).
$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log(\hat{c}_i^{p}) - \sum_{i \in Neg} \log(\hat{c}_i^{0}) \qquad \text{where} \quad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_p \exp(c_i^{p})} \qquad (3)$$
and the weight term α is set to 1 by cross validation.
Choosing scales and aspect ratios for default boxes: To handle different object scales, some methods [4,9] suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works [10,11] have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, [12] showed that adding global context pooled from a feature map can help smooth the segmentation results.
| 1512.02325#16 | SSD: Single Shot MultiBox Detector |
1512.02595 | 16 | # 3.1 Preliminaries
Figure 1 shows the architecture of the DS2 system which at its core is similar to the previous DS1 system [26]: a recurrent neural network (RNN) trained to ingest speech spectrograms and generate text transcriptions.
Let a single utterance x(i) and label y(i) be sampled from a training set X = {(x(1), y(1)), (x(2), y(2)), . . .}. Each utterance, x(i), is a time-series of length T(i) where every time-slice is a vector of audio features, x(i)_t, t = 1, . . . , T(i). We use a spectrogram of power normalized audio clips as the features to the system, so x(i)_{t,p} denotes the power of the p'th frequency bin in the audio frame at time t. The goal of the RNN is to convert an input sequence x(i) into a final transcription y(i). For notational convenience, we drop the superscripts and use x to denote a chosen utterance and y the corresponding label.
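As a rough illustration of this featurization (the window, stride, and log compression below are assumptions, not the paper's exact front-end), a power spectrogram can be computed as:

```python
# Sketch: power spectrogram x_{t,p} over short overlapping windows of an audio clip.
import numpy as np

def power_spectrogram(audio, sample_rate=16000, window_ms=20, stride_ms=10):
    win = int(sample_rate * window_ms / 1000)
    hop = int(sample_rate * stride_ms / 1000)
    frames = [audio[s:s + win] * np.hanning(win)
              for s in range(0, len(audio) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)) ** 2   # power of bin p at frame t
    return np.log1p(spec)   # log compression; per-utterance normalization applied separately
```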
1512.02325 | 17 | Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8 and 4×4) which are used in the framework. In practice, we can use many more with small computational overhead.
Feature maps from different levels within a network are known to have different (empirical) receptive field sizes [13]. Fortunately, within the SSD framework, the default boxes do not necessarily need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m feature maps for prediction. The scale of the default boxes for each feature map is computed as:
s_k = s_{min} + \frac{s_{max} - s_{min}}{m - 1}(k - 1), \qquad k \in [1, m] \qquad (4)
1512.02595 | 17 | The outputs of the network are the graphemes of each language. At each output time-step t, the RNN makes a prediction over characters, p(\ell_t \mid x), where \ell_t is either a character in the alphabet or the blank symbol. In English we have \ell_t \in \{a, b, c, \ldots, z, space, apostrophe, blank\}, where we have added the apostrophe as well as a space symbol to denote word boundaries. For the Mandarin system the network outputs simplified Chinese characters. We describe this in more detail in Section 3.9.
The RNN model is composed of several layers of hidden units. The architectures we experiment with consist of one or more convolutional layers, followed by one or more recurrent layers, followed by one or more fully connected layers.
The hidden representation at layer l is given by h^l with the convention that h^0 represents the input x. The bottom of the network is one or more convolutions over the time dimension of the input. For a context window of size c, the i-th activation at time-step t of the convolutional layer is given by
1512.02325 | 18 | where s_min is 0.2 and s_max is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as a_r ∈ {1, 2, 3, 1/2, 1/3}. We can compute the width (w_k^a = s_k √a_r) and height (h_k^a = s_k / √a_r) for each default box. For the aspect ratio of 1, we also add a default box whose scale is s'_k = √(s_k s_{k+1}), resulting in 6 default boxes per feature map location. We set the center of each default box to ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|), where |f_k| is the size of the k-th square feature map and i, j ∈ [0, |f_k|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1 the dog is matched to a default box in the
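A short sketch of this tiling; the helper name and the choice of s_{m+1} = 1.0 for the last layer's extra box are assumptions, not taken from the released code:

```python
# Sketch: default-box scales from Eq. (4) plus widths/heights per aspect ratio,
# centered at each cell of each square feature map. Assumes at least two feature maps.
import math

def default_boxes(feature_map_sizes, s_min=0.2, s_max=0.9, aspect_ratios=(1, 2, 3, 1/2, 1/3)):
    m = len(feature_map_sizes)
    scales = [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]
    scales.append(1.0)                                   # assumed value of s_{m+1} for the last extra box
    boxes = []
    for k, fk in enumerate(feature_map_sizes):
        for i in range(fk):
            for j in range(fk):
                cx, cy = (i + 0.5) / fk, (j + 0.5) / fk
                for ar in aspect_ratios:
                    boxes.append((cx, cy, scales[k] * math.sqrt(ar), scales[k] / math.sqrt(ar)))
                s_extra = math.sqrt(scales[k] * scales[k + 1])   # s'_k, the extra aspect-ratio-1 box
                boxes.append((cx, cy, s_extra, s_extra))
    return boxes   # each box as (cx, cy, w, h) in relative image coordinates
```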
1512.02595 | 18 | h_{t,i}^{l} = f(w_i^{l} \circ h_{t-c:t+c}^{l-1}) \qquad (1)

where \circ denotes the element-wise product between the i-th filter and the context window of the previous layer's activations, and f denotes a unary nonlinear function. We use the clipped rectified-linear (ReLU) function \sigma(x) = \min\{\max\{x, 0\}, 20\} as our nonlinearity. In some layers, usually the first, we sub-sample by striding the convolution by s frames. The goal is to shorten the number of time-steps for the recurrent layers above.
Following the convolutional layers are one or more bidirectional recurrent layers [57]. The forward-in-time \overrightarrow{h}^{l} and backward-in-time \overleftarrow{h}^{l} recurrent layer activations are computed as
\overrightarrow{h}_t^{l} = g(h_t^{l-1}, \overrightarrow{h}_{t-1}^{l}), \qquad \overleftarrow{h}_t^{l} = g(h_t^{l-1}, \overleftarrow{h}_{t+1}^{l}) \qquad (2)

[Figure 1 diagram: layers stacked from bottom to top — Spectrogram, 1D or 2D Invariant Convolution, bidirectional Recurrent or GRU layers with Batch Normalization, Fully Connected, CTC.]
1512.02595 | 19 | Figure 1: Architecture of the DS2 system used to train on both English and Mandarin speech. We explore variants of this architecture by varying the number of convolutional layers from 1 to 3 and the number of recurrent or GRU layers from 1 to 7.
The two sets of activations are summed to form the output activations for the layer h^l = \overrightarrow{h}^{l} + \overleftarrow{h}^{l}. The function g(\cdot) can be the standard recurrent operation

\overrightarrow{h}_t^{l} = f(W^{l} h_t^{l-1} + \overrightarrow{U}^{l} \overrightarrow{h}_{t-1}^{l} + b^{l}) \qquad (3)

where W^l is the input-hidden weight matrix, \overrightarrow{U}^{l} is the recurrent weight matrix and b^l is a bias term. In this case the input-hidden weights are shared for both directions of the recurrence. The function g(\cdot) can also represent more complex recurrence operations such as the Long Short-Term Memory (LSTM) units [30] and the gated recurrent units (GRU) [11].
After the bidirectional recurrent layers we apply one or more fully connected layers with
h_t^{l} = f(W^{l} h_t^{l-1} + b^{l}) \qquad (4)
The output layer L is a softmax computing a probability distribution over characters given by
p(\ell_t = k \mid x) = \frac{\exp(w_k \cdot h_t^{L-1})}{\sum_j \exp(w_j \cdot h_t^{L-1})} \qquad (5)
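A compact NumPy sketch of one bidirectional layer as defined by Eqs. (2)-(3), with shared input-hidden weights and summed directions; the function name and dimensions are assumptions, and the clipped-ReLU cap follows the description above:

```python
# Sketch: one bidirectional vanilla-RNN layer; each direction has its own recurrent
# matrix, the input-hidden weights W are shared, and the two directions are summed.
import numpy as np

def relu_clip(x, cap=20.0):
    return np.minimum(np.maximum(x, 0.0), cap)

def bidirectional_rnn(h_prev_layer, W, U_fwd, U_bwd, b, f=relu_clip):
    """h_prev_layer: (T, D) activations of layer l-1; W: (H, D); U_*: (H, H); b: (H,)."""
    T = h_prev_layer.shape[0]
    H = W.shape[0]
    fwd, bwd = np.zeros((T, H)), np.zeros((T, H))
    for t in range(T):                       # forward in time
        prev = fwd[t - 1] if t > 0 else np.zeros(H)
        fwd[t] = f(W @ h_prev_layer[t] + U_fwd @ prev + b)
    for t in reversed(range(T)):             # backward in time
        nxt = bwd[t + 1] if t < T - 1 else np.zeros(H)
        bwd[t] = f(W @ h_prev_layer[t] + U_bwd @ nxt + b)
    return fwd + bwd                         # h^l = forward + backward
```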
1512.02325 | 20 | Hard negative mining After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training.
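A small sketch of this selection rule (helper names and shapes are assumptions):

```python
# Sketch of hard negative mining: keep all positives, then keep only the
# highest-confidence-loss negatives up to a 3:1 negative-to-positive ratio.
import numpy as np

def hard_negative_mask(conf_loss, positive_mask, neg_pos_ratio=3):
    """conf_loss: (num_boxes,) per-default-box confidence loss.
    positive_mask: (num_boxes,) boolean, True for matched default boxes."""
    num_pos = int(positive_mask.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~positive_mask).sum()))
    neg_loss = np.where(positive_mask, -np.inf, conf_loss)    # exclude positives from the ranking
    keep_neg = np.argsort(-neg_loss)[:num_neg]                # indices of the top-loss negatives
    mask = positive_mask.copy()
    mask[keep_neg] = True
    return mask   # boxes that contribute to the confidence loss
```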
Data augmentation To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options:
– Use the entire original input image.
– Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9.
– Randomly sample a patch.
The size of each sampled patch is [0.1, 1] of the original image size, and the aspect ratio is between 1/2 and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in [14].
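A hedged sketch of the sampling procedure above; the interpretation of the patch-size range and the trial loop are assumptions rather than the released data-augmentation code:

```python
# Sketch: pick one sampling option per image and accept a patch only if it meets
# the minimum jaccard overlap with at least one ground-truth box.
import random

def jaccard(a, b):
    """Boxes as (xmin, ymin, xmax, ymax) in relative [0, 1] coordinates."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-10)

def sample_patch(gt_boxes, max_trials=50):
    min_iou = random.choice([None, 0.1, 0.3, 0.5, 0.7, 0.9, "random"])
    if min_iou is None:
        return (0.0, 0.0, 1.0, 1.0)                  # use the entire original image
    for _ in range(max_trials):
        side = random.uniform(0.1, 1.0)              # patch size relative to the image (assumed meaning)
        ar = random.uniform(0.5, 2.0)                # aspect ratio in [1/2, 2]
        w, h = side * ar ** 0.5, side / ar ** 0.5
        if w > 1.0 or h > 1.0:
            continue
        x, y = random.uniform(0.0, 1.0 - w), random.uniform(0.0, 1.0 - h)
        patch = (x, y, x + w, y + h)
        if min_iou == "random" or any(jaccard(patch, g) >= min_iou for g in gt_boxes):
            return patch
    return (0.0, 0.0, 1.0, 1.0)
```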
1512.02595 | 20 | The model is trained using the CTC loss function [22]. Given an input-output pair (x, y) and the current parameters of the network θ, we compute the loss function L(x, y; θ) and its derivative with respect to the parameters of the network ∇_θ L(x, y; θ). This derivative is then used to update the network parameters through the backpropagation through time algorithm.
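For illustration, the snippet below uses PyTorch's built-in CTC loss rather than the authors' GPU implementation; all sizes here are made up:

```python
# Sketch: per-frame character log-probabilities from the acoustic model feed the
# CTC objective, which sums over all alignments of the target transcript.
import torch
import torch.nn.functional as F

T, N, C = 200, 8, 29            # frames, batch size, characters including blank (assumed sizes)
logits = torch.randn(T, N, C, requires_grad=True)
targets = torch.randint(1, C, (N, 30), dtype=torch.long)      # character indices; index 0 is blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, 31, (N,), dtype=torch.long)

ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(F.log_softmax(logits, dim=2), targets, input_lengths, target_lengths)
loss.backward()                 # gradients flow back to the acoustic model parameters
```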
In the following subsections we describe the architectural and algorithmic improvements made relative to DS1 [26]. Unless otherwise stated these improvements are language agnostic. We report results on an English speaker held out development set which is an internal dataset containing 2048 utterances of primarily read speech. All models are trained on datasets described in Section 5. We report Word Error Rate (WER) for the English system and Character Error Rate (CER) for the Mandarin system. In both cases we integrate a language model in a beam search decoding step as described in Section 3.8.
1512.02325 | 21 | SSD: Single Shot MultiBox Detector
# 3 Experimental Results
Base network Our experiments are all based on VGG16 [15], which is pre-trained on the ILSVRC CLS-LOC dataset [16]. Similar to DeepLab-LargeFOV [17], we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s2 to 3×3−s1, and use the à trous algorithm [18] to fill the "holes". We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10^-3, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe [19] and is open source at: https://github.com/weiliu89/caffe/tree/ssd .
# 3.1 PASCAL VOC2007
1512.02595 | 21 |

| Architecture | Hidden Units | Train (Baseline) | Train (BatchNorm) | Dev (Baseline) | Dev (BatchNorm) |
|---|---|---|---|---|---|
| 1 RNN, 5 total | 2400 | 10.55 | 11.99 | 13.55 | 14.40 |
| 3 RNN, 5 total | 1880 | 9.55 | 8.29 | 11.61 | 10.56 |
| 5 RNN, 7 total | 1510 | 8.59 | 7.61 | 10.77 | 9.78 |
| 7 RNN, 9 total | 1280 | 8.76 | 7.68 | 10.83 | 9.52 |
Table 1: Comparison of WER on a training and development set for various depths of RNN, with and without BatchNorm. The number of parameters is kept constant as the depth increases, thus the number of hidden units per layer decreases. All networks have 38 million parameters. The architecture âM RNN, N totalâ implies 1 layer of 1D convolution at the input, M consecutive bidirectional RNN layers, and the rest as fully-connected layers with N total layers in the network.
# 3.2 Batch Normalization for Deep RNNs
1512.02325 | 22 | On this dataset, we compare against Fast R-CNN [6] and Faster R-CNN [2] on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3³. We initialize the parameters for all the newly added convolutional layers with the "xavier" method [20]. For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1/3 and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2. Since, as pointed out in [12], conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in [12] to scale the feature
1512.02595 | 22 | # 3.2 Batch Normalization for Deep RNNs
To efficiently scale our model as we scale the training set, we increase the depth of the networks by adding more hidden layers, rather than making each layer larger. Previous work has examined doing so by increasing the number of consecutive bidirectional recurrent layers [24]. We explore Batch Normalization (BatchNorm) as a technique to accelerate training for such networks [63] since they often suffer from optimization issues.

Recent research has shown that BatchNorm improves the speed of convergence of recurrent nets, without showing any improvement in generalization performance [37]. In contrast, we demonstrate that when applied to very deep networks of simple RNNs on large data sets, batch normalization substantially improves final generalization error while greatly accelerating training.
In a typical feed-forward layer containing an affine transformation followed by a non-linearity f(\cdot), we insert a BatchNorm transformation by applying f(B(Wh)) instead of f(Wh + b), where

B(x) = \gamma \frac{x - \mathrm{E}[x]}{(\mathrm{Var}[x] + \epsilon)^{1/2}} + \beta. \qquad (6)
1512.02325 | 23 | norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10^-3 learning rate for 40k iterations, then continue training for 10k iterations with 10^-4 and 10^-5. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tune them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP.
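A sketch of the L2 normalization trick referred to here (per-location L2 normalization of conv4_3 features with a learnable per-channel scale initialized to 20); array shapes and the function name are assumptions:

```python
# Sketch: normalize the feature vector at each spatial location to unit L2 norm,
# then rescale by a learnable per-channel factor (initialized to 20, learned by backprop).
import numpy as np

def l2norm_scale(feature_map, scale, eps=1e-10):
    """feature_map: (channels, height, width); scale: (channels,) learnable parameter."""
    norm = np.sqrt((feature_map ** 2).sum(axis=0, keepdims=True)) + eps
    return scale[:, None, None] * feature_map / norm
```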
1512.02595 | 23 | The terms E and Var are the empirical mean and variance over a minibatch. The bias b of the layer is dropped since its effect is cancelled by mean removal. The learnable parameters γ and β allow the layer to scale and shift each hidden unit as desired. The constant ε is small and positive, and is included only for numerical stability. In our convolutional layers the mean and variance are estimated over all the temporal output units for a given convolutional filter on a minibatch. The BatchNorm transformation reduces internal covariate shift by insulating a given layer from potentially uninteresting changes in the mean and variance of the layer's input.
We consider two methods of extending BatchNorm to bidirectional RNNs [37]. A natural extension is to insert a BatchNorm transformation immediately before every non-linearity. Equation 3 then becomes
\overrightarrow{h}_t^{l} = f(B(W^{l} h_t^{l-1} + \overrightarrow{U}^{l} \overrightarrow{h}_{t-1}^{l})). \qquad (7)
1512.02325 | 24 | To understand the performance of our two SSD models in more detail, we used the detection analysis tool from [21]. Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with "weak" (0.1 jaccard overlap) criteria. Compared to R-CNN [22], SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller
³ For SSD512 model, we add extra conv12_2 for prediction, set s_min to 0.15, and 0.07 on conv4_3.
1512.02595 | 24 | In this case the mean and variance statistics are accumulated over a single time-step of the minibatch. The sequential dependence between time-steps prevents averaging over all time-steps. We find that this technique does not lead to improvements in optimization. We also tried accumulating an average over successive time-steps, so later time-steps are normalized over all present and previous time-steps. This also proved ineffective and greatly complicated backpropagation.

We find that sequence-wise normalization [37] overcomes these issues. The recurrent computation is given by
\overrightarrow{h}_t^{l} = f(B(W^{l} h_t^{l-1}) + \overrightarrow{U}^{l} \overrightarrow{h}_{t-1}^{l}). \qquad (8)
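A minimal sketch of the sequence-wise statistics used in Eq. (8); shapes are assumptions, and only the input-hidden (non-recurrent) term would be normalized inside the recurrence:

```python
# Sketch: per-unit mean and variance are computed over every time-step of every
# utterance in the minibatch, then used to normalize the pre-activations W @ h^{l-1}.
import numpy as np

def sequence_batch_norm(x, gamma, beta, eps=1e-5):
    """x: (batch, time, units) pre-activations for the whole minibatch."""
    mean = x.mean(axis=(0, 1), keepdims=True)      # per-unit mean over batch and time
    var = x.var(axis=(0, 1), keepdims=True)        # per-unit variance over batch and time
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```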
For each hidden unit, we compute the mean and variance statistics over all items in the minibatch over the length of the sequence. Figure 2 shows that deep networks converge faster with sequence-wise normalization. Table 1 shows that the performance improvement from sequence-wise normalization increases with the depth of the network, with a 12% performance difference for the deepest network. When comparing depth, in order to control for model size we hold constant the total
1512.02325 | 25 | Table 1 (PASCAL VOC2007 test detection results), first rows; the remaining rows and the caption follow below:

| Method | data | mAP | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fast [6] | 07 | 66.9 | 74.5 | 78.3 | 69.2 | 53.2 | 36.6 | 77.3 | 78.2 | 82.0 | 40.7 | 72.7 | 67.9 | 79.6 | 79.2 | 73.0 | 69.0 | 30.1 | 65.4 | 70.2 | 75.8 | 65.8 |
| Fast [6] | 07+12 | 70.0 | 77.0 | 78.1 | 69.3 | 59.4 | 38.3 | 81.6 | 78.6 | 86.7 | 42.8 | 78.8 | 68.9 | 84.7 | 82.0 | 76.6 | 69.9 | 31.8 | 70.1 | 74.8 | 80.4 | 70.4 |
| Faster [2] | 07 | 69.9 | 70.0 | 80.6 | 70.1 | 57.3 | 49.9 | 78.2 | 80.4 | 82.0 | 52.2 | 75.3 | 67.2 | 80.3 | 79.8 | 75.0 | 76.3 | 39.1 | 68.3 | 67.3 | 81.1 | 67.6 |
| Faster [2] | 07+12 | 73.2 | 76.5 | 79.0 | 70.9 | 65.5 | 52.1 | 83.1 | 84.7 | 86.4 | 52.0 | 81.9 | 65.7 | 84.8 | 84.6 | 77.5 | 76.7 | 38.8 | 73.6 | 73.9 | 83.0 | 72.6 |
1512.02595 | 25 | [Figure 2 plot: training cost versus iteration (×10^3) for a 5-1 model with BatchNorm ("5-1 BN") and without ("5-1 No BN").]
Figure 2: Training curves of two models trained with and without BatchNorm. We start the plot after the first epoch of training as the curve is more difficult to interpret due to the SortaGrad curriculum method mentioned in Section 3.3.
|  | Train (Baseline) | Train (BatchNorm) | Dev (Baseline) | Dev (BatchNorm) |
|---|---|---|---|---|
| Not Sorted | 10.71 | 8.04 | 11.96 | 9.78 |
| Sorted | 8.76 | 7.68 | 10.83 | 9.52 |
Table 2: Comparison of WER on a training and development set with and without SortaGrad, and with and without batch normalization.
number of parameters and still see strong performance gains. We would expect to see even larger improvements from depth if we held constant the number of activations per layer and added layers. We also find that BatchNorm harms generalization error for the shallowest network just as it converges slower for shallower networks.
1512.02325 | 26 | Table 1, continued:

| Method | data | mAP | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Faster [2] | 07+12+COCO | 78.8 | 84.3 | 82.0 | 77.7 | 68.9 | 65.7 | 88.1 | 88.4 | 88.9 | 63.6 | 86.3 | 70.8 | 85.9 | 87.6 | 80.1 | 82.3 | 53.6 | 80.4 | 75.8 | 86.6 | 78.9 |
| SSD300 | 07 | 68.0 | 73.4 | 77.5 | 64.1 | 59.0 | 38.9 | 75.2 | 80.8 | 78.5 | 46.0 | 67.8 | 69.2 | 76.6 | 82.1 | 77.0 | 72.5 | 41.2 | 64.2 | 69.1 | 78.0 | 68.5 |
| SSD300 | 07+12 | 74.3 | 75.5 | 80.2 | 72.3 | 66.3 | 47.6 | 83.0 | 84.2 | 86.1 | 54.7 | 78.3 | 73.9 | 84.5 | 85.3 | 82.6 | 76.2 | 48.6 | 73.9 | 76.0 | 83.4 | 74.0 |
| SSD300 | 07+12+COCO | 79.6 | 80.9 | 86.3 | 79.0 | 76.2 | 57.6 | 87.3 | 88.2 | 88.6 | 60.5 | 85.4 | 76.7 | 87.5 | 89.2 | 84.5 | 81.4 | 55.0 | 81.9 | 81.5 | 85.9 | 78.9 |
1512.02595 | 26 | The BatchNorm approach works well in training, but is difficult to implement for a deployed ASR system, since it is often necessary to evaluate a single utterance in deployment rather than a batch. We find that normalizing each neuron to its mean and variance over just the sequence degrades performance. Instead, we store a running average of the mean and variance for the neuron collected during training, and use these for evaluation in deployment [63]. Using this technique, we can evaluate a single utterance at a time with better results than evaluating with a large batch.
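A sketch of this deployment-time variant (the momentum value, shapes, and class name are assumptions):

```python
# Sketch: accumulate running averages of the per-unit mean and variance during training,
# then normalize with those fixed statistics at deployment so a single utterance suffices.
import numpy as np

class RunningNorm:
    def __init__(self, units, momentum=0.99, eps=1e-5):
        self.mean = np.zeros(units)
        self.var = np.ones(units)
        self.momentum, self.eps = momentum, eps

    def update(self, batch):                     # called during training on (batch, time, units)
        self.mean = self.momentum * self.mean + (1 - self.momentum) * batch.mean(axis=(0, 1))
        self.var = self.momentum * self.var + (1 - self.momentum) * batch.var(axis=(0, 1))

    def normalize(self, x, gamma, beta):         # called at deployment, any batch size
        return gamma * (x - self.mean) / np.sqrt(self.var + self.eps) + beta
```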
# 3.3 SortaGrad
Training on examples of varying length poses some algorithmic challenges. One possible solution is truncating backpropagation through time [68], so that all examples have the same sequence length during training [52]. However, this can inhibit the ability to learn longer term dependencies. Other works have found that presenting examples in order of difficulty can accelerate online learning [6, 70]. A common theme in many sequence learning problems including machine translation and speech recognition is that longer examples tend to be more challenging [11].
The CTC cost function that we use implicitly depends on the length of the utterance,
\mathcal{L}(x, y; \theta) = -\log \sum_{\ell \in \mathrm{Align}(x, y)} \prod_{t}^{T} p_{\mathrm{ctc}}(\ell_t \mid x; \theta). \qquad (9)
1512.02325 | 27 | Table 1, continued:

| Method | data | mAP | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SSD512 | 07 | 71.6 | 75.1 | 81.4 | 69.8 | 60.8 | 46.3 | 82.6 | 84.7 | 84.1 | 48.5 | 75.0 | 67.4 | 82.3 | 83.9 | 79.4 | 76.6 | 44.9 | 69.9 | 69.1 | 78.1 | 71.8 |
| SSD512 | 07+12 | 76.8 | 82.4 | 84.7 | 78.4 | 73.8 | 53.2 | 86.2 | 87.5 | 86.0 | 57.8 | 83.1 | 70.2 | 84.9 | 85.2 | 83.9 | 79.7 | 50.3 | 77.9 | 73.9 | 82.5 | 75.3 |
| SSD512 | 07+12+COCO | 81.6 | 86.6 | 88.3 | 82.4 | 76.0 | 66.3 | 88.6 | 88.9 | 89.1 | 65.1 | 88.4 | 73.6 | 86.5 | 88.9 | 85.3 | 84.6 | 59.1 | 85.0 | 80.4 | 87.4 | 81.2 |

Table 1: PASCAL VOC2007 test detection results.
1512.02595 | 27 | \mathcal{L}(x, y; \theta) = -\log \sum_{\ell \in \mathrm{Align}(x, y)} \prod_{t=1}^{T} p_{\mathrm{ctc}}(\ell_t \mid x; \theta). \qquad (9)
where Align(x, y) is the set of all possible alignments of the characters of the transcription y to frames of input x under the CTC operator. In Equation 9, the inner term is a product over time-steps of the sequence, which shrinks with the length of the sequence since p_{\mathrm{ctc}}(\ell_t \mid x; \theta) < 1. This motivates a curriculum learning strategy we title SortaGrad. SortaGrad uses the length of the utterance as a heuristic for difficulty, since long utterances have higher cost than short utterances.
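To make the length dependence concrete, here is a toy sketch (an illustration only, not a CTC implementation): if each frame assigns an average probability p < 1 to the character of one alignment, the inner product in Equation 9 behaves like p**T, so the cost contributed by that alignment grows roughly linearly with the number of frames T. The per-frame probability p below is an assumed constant.

```python
import numpy as np

# Toy illustration of why longer utterances have higher CTC cost:
# the product over T per-frame probabilities (each < 1) shrinks with T,
# so its negative log grows with T.
p = 0.9  # assumed average per-frame probability of the aligned character
for T in (50, 200, 800):
    cost_of_single_alignment = -T * np.log(p)   # -log(p**T)
    print(f"T = {T:4d}   -log p**T = {cost_of_single_alignment:7.1f}")
```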
Architecture            Simple RNN   GRU
5 layers, 1 Recurrent    14.40       10.53
5 layers, 3 Recurrent    10.56        8.00
7 layers, 5 Recurrent     9.78        7.79
9 layers, 7 Recurrent     9.52        8.19
Table 3: Comparison of development set WER for networks with either simple RNN or GRU, for various depths. All models have batch normalization, one layer of 1D-invariant convolution, and approximately 38 million parameters. | 1512.02595#27 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
1512.02325 | 28 | R-CNN use input images whose minimum dimension is 600. The two SSD models have exactly the same settings except that they have different input sizes (300×300 vs. 512×512). It is obvious that larger input size leads to better results, and more data always helps. Data: "07": VOC2007 trainval, "07+12": union of VOC2007 and VOC2012 trainval, "07+12+COCO": first train on COCO trainval35k then fine-tune on 07+12. | 1512.02325#28 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . | http://arxiv.org/pdf/1512.02325 | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg | cs.CV | ECCV 2016 | null | cs.CV | 20151208 | 20161229 | [] |
1512.02595 | 28 | In the first training epoch, we iterate through the training set in increasing order of the length of the longest utterance in the minibatch. After the first epoch, training reverts to a random order over minibatches. Table 2 shows a comparison of training cost with and without SortaGrad on the 9 layer model with 7 recurrent layers. This effect is particularly pronounced for networks without BatchNorm, since they are numerically less stable. In some sense the two techniques substitute for one another, though we still find gains when applying SortaGrad and BatchNorm together. Even with BatchNorm we find that this curriculum improves numerical stability and sensitivity to small changes in training. Numerical instability can arise from different transcendental function implementations in the CPU and the GPU, especially when computing the CTC cost. This curriculum gives comparable results for both implementations.
We suspect that these benefits occur primarily because long utterances tend to have larger gradients, yet we use a fixed learning rate independent of utterance length. Furthermore, longer utterances are more likely to cause the internal state of the RNNs to explode at an early stage in training.
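As a concrete reading of this curriculum, here is a minimal Python sketch of the SortaGrad ordering; the function name and data layout are illustrative assumptions, not a released implementation.

```python
import random

def sortagrad_batches(utterances, batch_size, epoch):
    """Minimal sketch of the SortaGrad schedule. `utterances` is a list of
    (audio_frames, transcript) pairs; names here are illustrative only."""
    batches = [utterances[i:i + batch_size]
               for i in range(0, len(utterances), batch_size)]
    if epoch == 0:
        # First epoch: visit minibatches in increasing order of the length
        # of the longest utterance they contain.
        batches.sort(key=lambda batch: max(len(audio) for audio, _ in batch))
    else:
        # Later epochs: revert to a random order over minibatches.
        random.shuffle(batches)
    return batches
```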
# 3.4 Comparison of simple RNNs and GRUs | 1512.02595#28 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
1512.02325 | 29 | objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300 to 512×512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location.
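Since the default boxes are only referenced here, the following minimal sketch shows how boxes of several aspect ratios can be laid out at a single feature-map location; the function name, the specific scale, and the ratio set are illustrative assumptions rather than the paper's exact per-layer configuration.

```python
import math

def default_boxes_at(cx, cy, scale, aspect_ratios=(1.0, 2.0, 0.5, 3.0, 1.0 / 3.0)):
    """Illustrative default boxes centered at one feature-map location.
    cx, cy and scale are normalized to [0, 1]; scale/ratios are assumptions."""
    boxes = []
    for ar in aspect_ratios:
        w = scale * math.sqrt(ar)   # wider boxes for ar > 1
        h = scale / math.sqrt(ar)   # taller boxes for ar < 1
        boxes.append((cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0))
    return boxes

# Example: five default boxes at the center of a feature map, scale 0.2
print(default_boxes_at(0.5, 0.5, 0.2))
```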
# 3.2 Model analysis
To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300).
[Table 2 residue: columns are SSD300 variants; the rows "more data augmentation?", "include {1/2, 2} box?", "include {1/3, 3} box?", and "use atrous?" carry per-column check marks that did not survive extraction. The final row gives VOC2007 test mAP across the columns: 65.5, 71.6, 73.7, 74.2, 74.3.]
Table 2: Effects of various design choices and components on SSD performance. | 1512.02325#29 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . | http://arxiv.org/pdf/1512.02325 | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg | cs.CV | ECCV 2016 | null | cs.CV | 20151208 | 20161229 | [] |
1512.02595 | 29 | # 3.4 Comparison of simple RNNs and GRUs
The models we have shown so far are simple RNNs that have bidirectional recurrent layers with the recurrence for both the forward in time and backward in time directions modeled by Equation 3. Current research in speech and language processing has shown that having a more complex recurrence can allow the network to remember state over more time-steps while making them more computationally expensive to train [52, 8, 62, 2]. Two commonly used recurrent architectures are the Long Short-Term Memory (LSTM) units [30] and the Gated Recurrent Units (GRU) [11], though many other variations exist. A recent comprehensive study of thousands of variations of LSTM and GRU architectures showed that a GRU is comparable to an LSTM with a properly initialized forget gate bias, and their best variants are competitive with each other [32]. We decided to examine GRUs because experiments on smaller data sets showed the GRU and LSTM reach similar accuracy for the same number of parameters, but the GRUs were faster to train and less likely to diverge.
The GRUs we use are computed by | 1512.02595#29 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
1512.02325 | 30 | Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO [5]. Table 2 shows that we can improve mAP by 8.8% with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design.
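Since the sampling strategy is only summarized here, the following is a hedged sketch of IoU-constrained random cropping (followed in practice by a random horizontal flip); the thresholds, the crop-size range, and the trial count are illustrative assumptions rather than the exact published settings.

```python
import random

def jaccard(box_a, box_b):
    """Jaccard (IoU) overlap of two [xmin, ymin, xmax, ymax] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def sample_training_patch(img_w, img_h, gt_boxes, max_trials=50):
    """Sketch of patch-sampling augmentation: either keep the whole image or
    search for a random crop whose overlap with some ground-truth box exceeds
    a randomly chosen jaccard threshold. Constants are assumptions."""
    min_iou = random.choice([None, 0.1, 0.3, 0.5, 0.7, 0.9])
    if min_iou is None:
        return (0.0, 0.0, float(img_w), float(img_h))
    for _ in range(max_trials):
        w = random.uniform(0.3, 1.0) * img_w
        h = random.uniform(0.3, 1.0) * img_h
        x = random.uniform(0.0, img_w - w)
        y = random.uniform(0.0, img_h - h)
        patch = (x, y, x + w, y + h)
        if any(jaccard(patch, box) >= min_iou for box in gt_boxes):
            return patch
    return (0.0, 0.0, float(img_w), float(img_h))
```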
| 1512.02325#30 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . | http://arxiv.org/pdf/1512.02325 | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg | cs.CV | ECCV 2016 | null | cs.CV | 20151208 | 20161229 | [] |
1512.02595 | 30 | The GRUs we use are computed by
z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)
r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)
\tilde{h}_t = f(W_h x_t + r_t \circ (U_h h_{t-1}) + b_h)
h_t = (1 - z_t) h_{t-1} + z_t \tilde{h}_t \qquad (10)
where σ(·) is the sigmoid function, z and r represent the update and reset gates respectively, and we drop the layer superscripts for simplicity. We differ slightly from the standard GRU in that we multiply the hidden state h_{t-1} by U_h prior to scaling by the reset gate. This allows for all operations on h_{t-1} to be computed in a single matrix multiplication. The output nonlinearity f(·) is typically the hyperbolic tangent function tanh. However, we find similar performance for tanh and clipped-ReLU nonlinearities and choose to use the clipped-ReLU for simplicity and uniformity with the rest of the network (an illustrative sketch of this update appears below). | 1512.02595#30 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
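As a concrete reading of Equation 10 above, here is a minimal NumPy sketch of one step of this GRU variant; it is an assumption-laden illustration, not the authors' code, and the clipped-ReLU bound of 20 is an assumed value.

```python
import numpy as np

def clipped_relu(x, clip=20.0):
    return np.minimum(np.maximum(x, 0.0), clip)

def gru_step(x_t, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One step of the GRU variant in Equation 10. U_h @ h_prev is formed
    before the reset gate scales it, so all three recurrent products can
    share a single matrix multiplication in practice."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x_t + Uz @ h_prev + bz)                    # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev + br)                    # reset gate
    h_tilde = clipped_relu(Wh @ x_t + r * (Uh @ h_prev) + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde
```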
1512.02325 | 31 | [Figure 3 image residue: two rows of panels for animals, vehicles, and furniture; the top row plots the percentage of detections against total detections and the bottom row against total false positives.] | 1512.02325#31 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . | http://arxiv.org/pdf/1512.02325 | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg | cs.CV | ECCV 2016 | null | cs.CV | 20151208 | 20161229 | [] |
1512.02595 | 31 | Both GRU and simple RNN architectures benefit from batch normalization and show strong results with deep networks. However, Table 3 shows that for a fixed number of parameters, the GRU architectures achieve better WER for all network depths. This is clear evidence of the long term dependencies inherent in the speech recognition task present both within individual words and be-
Architecture   Channels         Filter dimension        Stride           Regular Dev   Noisy Dev
1-layer 1D     1280             11                      2                9.52          19.36
2-layer 1D     640, 640         5, 5                    1, 2             9.67          19.21
3-layer 1D     512, 512, 512    5, 5, 5                 1, 1, 2          9.20          20.22
1-layer 2D     32               41x11                   2x2              8.94          16.22
2-layer 2D     32, 32           41x11, 21x11            2x2, 2x1         9.06          15.71
3-layer 2D     32, 32, 96       41x11, 21x11, 21x11     2x2, 2x1, 2x1    8.61          14.74 | 1512.02595#31 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
1512.02325 | 32 | Fig. 3: Visualization of performance for SSD512 on animals, vehicles, and furniture from VOC2007 test. The top row shows the cumulative fraction of detections that are correct (Cor) or false positive due to poor localization (Loc), confusion with similar categories (Sim), with others (Oth), or with background (BG). The solid red line reflects the change of recall with strong criteria (0.5 jaccard overlap) as the number of detections increases. The dashed red line is using the weak criteria (0.1 jaccard overlap). The bottom row shows the distribution of top-ranked false positive types.
[Figure image residue: per-category sensitivity panels over object size categories (XS, S, M, L, XL) and aspect ratios; no further information is recoverable from this chunk.] | 1512.02325#32 | SSD: Single Shot MultiBox Detector | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . | http://arxiv.org/pdf/1512.02325 | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg | cs.CV | ECCV 2016 | null | cs.CV | 20151208 | 20161229 | [] |
1512.02595 | 32 | Table 4: Comparison of WER for various arrangements of convolutional layers. In all cases, the convolutions are followed by 7 recurrent layers and 1 fully connected layer. For 2D-invariant convolutions the first dimension is frequency and the second dimension is time. All models have BatchNorm, SortaGrad, and 35 million parameters.
tween words. As we discuss in Section 3.8, even simple RNNs are able to implicitly learn a language model due to the large amount of training data. Interestingly, the GRU networks with 5 or more recurrent layers do not significantly improve performance. We attribute this to the thinning from 1728 hidden units per layer for 1 recurrent layer to 768 hidden units per layer for 7 recurrent layers, to keep the total number of parameters constant.
The GRU networks outperform the simple RNNs in Table 3. However, in later results (Section 6) we find that as we scale up the model size, for a fixed computational budget the simple RNN networks perform slightly better. Given this, most of the remaining experiments use the simple RNN layers rather than the GRUs.
# 3.5 Frequency Convolutions | 1512.02595#32 | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale. | http://arxiv.org/pdf/1512.02595 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu | cs.CL | null | null | cs.CL | 20151208 | 20151208 | [] |
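As a concrete reading of the convolution arrangements in Table 4 above, here is a minimal PyTorch-style sketch of the best-performing "3-layer 2D" front-end (2D-invariant convolutions over frequency and time with 32, 32, and 96 channels, 41x11 / 21x11 / 21x11 filters, and 2x2 / 2x1 / 2x1 strides). The single input channel, the padding values, and the clipped-ReLU bound are assumptions, and the BatchNorm layers discussed in the text are omitted for brevity; this is a sketch, not the released model.

```python
import torch.nn as nn

# Sketch of a 3-layer 2D-invariant convolutional front-end over a spectrogram
# laid out as (batch, channel=1, frequency, time), following Table 4 above.
conv_frontend = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=(41, 11), stride=(2, 2), padding=(20, 5)),
    nn.Hardtanh(0.0, 20.0, inplace=True),   # clipped ReLU, consistent with the text
    nn.Conv2d(32, 32, kernel_size=(21, 11), stride=(2, 1), padding=(10, 5)),
    nn.Hardtanh(0.0, 20.0, inplace=True),
    nn.Conv2d(32, 96, kernel_size=(21, 11), stride=(2, 1), padding=(10, 5)),
    nn.Hardtanh(0.0, 20.0, inplace=True),
)
```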