id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
1611.10012#1
Speed/accuracy trade-offs for modern convolutional object detectors
# 1. Introduction A lot of progress has been made in recent years on object detection due to the use of convolutional neural networks (CNNs). Modern object detectors based on these networks â such as Faster R-CNN [31], R-FCN [6], Multibox [40], SSD [26] and YOLO [29] â are now good enough to be deployed in consumer products (e.g., Google Photos, Pin- terest Visual Search) and some have been shown to be fast enough to be run on mobile devices.
1611.10012#0
1611.10012#2
1611.10012
[ "1512.00567" ]
1611.10012#2
Speed/accuracy trade-offs for modern convolutional object detectors
However, it can be difï¬ cult for practitioners to decide what architecture is best suited to their application. Stan- dard accuracy metrics, such as mean average precision (mAP), do not tell the entire story, since for real deploy- ments of computer vision systems, running time and mem- ory usage are also critical. For example, mobile devices often require a small memory footprint, and self driving cars require real time performance. Server-side production systems, like those used in Google, Facebook or Snapchat, have more leeway to optimize for accuracy, but are still sub- ject to throughput constraints. While the methods that win competitions, such as the COCO challenge [25], are opti- mized for accuracy, they often rely on model ensembling and multicrop methods which are too slow for practical us- age. Unfortunately, only a small subset of papers (e.g., R- FCN [6], SSD [26] YOLO [29]) discuss running time in any detail. Furthermore, these papers typically only state that they achieve some frame-rate, but do not give a full picture of the speed/accuracy trade-off, which depends on many other factors, such as which feature extractor is used, input image sizes, etc. In this paper, we seek to explore the speed/accuracy trade-off of modern detection systems in an exhaustive and fair way. While this has been studied for full image clas- siï¬ cation( (e.g., [3]), detection models tend to be signif- icantly more complex. We primarily investigate single- model/single-pass detectors, by which we mean models that do not use ensembling, multi-crop methods, or other â
1611.10012#1
1611.10012#3
1611.10012
[ "1512.00567" ]
1611.10012#3
Speed/accuracy trade-offs for modern convolutional object detectors
tricksâ such as horizontal ï¬ ipping. In other words, we only pass a single image through a single network. For simplicity (and because it is more important for users of this technol- ogy), we focus only on test-time performance and not on how long these models take to train. Though it is impractical to compare every recently pro- posed detection system, we are fortunate that many of the leading state of the art approaches have converged on a common methodology (at least at a high level). This has allowed us to implement and compare a large number of de- tection systems in a uniï¬
1611.10012#2
1611.10012#4
1611.10012
[ "1512.00567" ]
1611.10012#4
Speed/accuracy trade-offs for modern convolutional object detectors
ed manner. In particular, we have created implementations of the Faster R-CNN, R-FCN and SSD meta-architectures, which at a high level consist of a single convolutional network, trained with a mixed regres- sion and classiï¬ cation objective, and use sliding window style predictions. To summarize, our main contributions are as follows: â ¢ We provide a concise survey of modern convolutional 1 detection systems, and describe how the leading ones follow very similar designs. We describe our ï¬ exible and uniï¬ ed implementation of three meta-architectures (Faster R-CNN, R-FCN and SSD) in Tensorï¬ ow which we use to do exten- sive experiments that trace the accuracy/speed trade- off curve for different detection systems, varying meta- architecture, feature extractor, image resolution, etc. â
1611.10012#3
1611.10012#5
1611.10012
[ "1512.00567" ]
1611.10012#5
Speed/accuracy trade-offs for modern convolutional object detectors
¢ Our ï¬ ndings show that using fewer proposals for Faster R-CNN can speed it up signiï¬ cantly without a big loss in accuracy, making it competitive with its faster cousins, SSD and RFCN. We show that SSDs performance is less sensitive to the quality of the fea- ture extractor than Faster R-CNN and R-FCN. And we identify sweet spots on the accuracy/speed trade-off curve where gains in accuracy are only possible by sac- riï¬ cing speed (within the family of detectors presented here).
1611.10012#4
1611.10012#6
1611.10012
[ "1512.00567" ]
1611.10012#6
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Several of the meta-architecture and feature-extractor combinations that we report have never appeared be- fore in literature. We discuss how we used some of these novel combinations to train the winning entry of the 2016 COCO object detection challenge. # 2. Meta-architectures Neural nets have become the leading method for high quality object detection in recent years. In this section we survey some of the highlights of this literature. The R-CNN paper by Girshick et al. [11] was among the ï¬ rst modern incarnations of convolutional network based detection. In- spired by recent successes on image classiï¬ cation [20], the R-CNN method took the straightforward approach of crop- ping externally computed box proposals out of an input im- age and running a neural net classiï¬ er on these crops. This approach can be expensive however because many crops are necessary, leading to signiï¬ cant duplicated computation from overlapping crops. Fast R-CNN [10] alleviated this problem by pushing the entire image once through a feature extractor then cropping from an intermediate layer so that crops share the computation load of feature extraction. While both R-CNN and Fast R-CNN relied on an exter- nal proposal generator, recent works have shown that it is possible to generate box proposals using neural networks as well [41, 40, 8, 31]. In these works, it is typical to have a collection of boxes overlaid on the image at different spatial locations, scales and aspect ratios that act as â
1611.10012#5
1611.10012#7
1611.10012
[ "1512.00567" ]
1611.10012#7
Speed/accuracy trade-offs for modern convolutional object detectors
anchorsâ (sometimes called â priorsâ or â default boxesâ ). A model is then trained to make two predictions for each anchor: (1) a discrete class prediction for each anchor, and (2) a continuous prediction of an offset by which the anchor needs to be shifted to ï¬ t the groundtruth bounding box. Papers that follow this anchors methodology then 2 minimize a combined classiï¬ cation and regression loss that we now describe. For each anchor a, we ï¬ rst ï¬ nd the best matching groundtruth box b (if one exists). If such a match can be found, we call a a â positive anchorâ , and assign it (1) a class label ya â {1 . . . K} and (2) a vector encoding of box b with respect to anchor a (called the box encoding If no match is found, we call a a â negative Ï (ba; a)). anchorâ and we set the class label to be ya = 0. If for the anchor a we predict box encoding floc(I; a, θ) and corresponding class fcls(I; a, θ), where I is the image and θ the model parameters, then the loss for a is measured as a weighted sum of a location-based loss and a classiï¬ cation loss: L(a,Z; 0) = a: U[ais positive] « Lioc(¢(ba; a) â fioc(Z; a, 4)) +8 bets(Ya, feis(Z; a, 4)), (dd) where α, β are weights balancing localization and classi- ï¬ cation losses. To train the model, Equation 1 is averaged over anchors and minimized with respect to parameters θ. The choice of anchors has signiï¬ cant implications both for accuracy and computation. In the (ï¬ rst) Multibox paper [8], these anchors (called â box priorsâ by the au- thors) were generated by clustering groundtruth boxes in the dataset. In more recent works, anchors are generated by tiling a collection of boxes at different scales and aspect ratios regularly across the image.
1611.10012#6
1611.10012#8
1611.10012
[ "1512.00567" ]
1611.10012#8
Speed/accuracy trade-offs for modern convolutional object detectors
The advantage of hav- ing a regular grid of anchors is that predictions for these boxes can be written as tiled predictors on the image with shared parameters (i.e., convolutions) and are reminiscent of traditional sliding window methods, e.g. [44]. The Faster R-CNN [31] paper and the (second) Multibox paper [40] (which called these tiled anchors â convolutional priorsâ ) were the ï¬ rst papers to take this new approach. # 2.1. Meta-architectures In our paper we focus primarily on three recent (meta)- architectures: SSD (Single Shot Multibox Detector [26]), Faster R-CNN [31] and R-FCN (Region-based Fully Con- volutional Networks [6]). While these papers were orig- inally presented with a particular feature extractor (e.g., VGG, Resnet, etc), we now review these three methods, de- coupling the choice of meta-architecture from feature ex- tractor so that conceptually, any feature extractor can be used with SSD, Faster R-CNN or R-FCN. # 2.1.1 Single Shot Detector (SSD). Though the SSD paper was published only recently (Liu et al., [26]), we use the term SSD to refer broadly to archi- tectures that use a single feed-forward convolutional net- work to directly predict classes and anchor offsets without requiring a second stage per-proposal classiï¬ cation oper- ation (Figure 1a). Under this deï¬ nition, the SSD meta- architecture has been explored in a number of precursors to [26]. Both Multibox and the Region Proposal Network Paper Szegedy et al. [40] Redmon et al. [29] Ren et al. [31] He et al. [13] Liu et al. [26] (v1) Liu et al. [26] (v2, v3) Dai et al [6] Meta-architecture SSD SSD Faster R-CNN Faster R-CNN SSD SSD R-FCN Feature Extractor InceptionV3 Custom (GoogLeNet inspired) VGG ResNet-101 InceptionV3 VGG ResNet-101 Matching Bipartite Box Center Argmax Argmax Argmax Argmax Argmax Box Encoding Ï
1611.10012#7
1611.10012#9
1611.10012
[ "1512.00567" ]
1611.10012#9
Speed/accuracy trade-offs for modern convolutional object detectors
(ba, a) [x0, y0, x1, y1] â â [xc, yc, h] , yc ha , yc ha [x0, y0, x1, y1] , yc ha , yc ha w, [ xc wa [ xc wa , log w, log h] , log w, log h] [ xc wa [ xc wa , log w, log h] , log w, log h] Location Loss functions L2 L2 SmoothL1 SmoothL1 L2 SmoothL1 SmoothL1 Table 1: Convolutional detection models that use one of the meta-architectures described in Section 2. Boxes are encoded with respect to a matching anchor a via a function Ï (Equation 1), where [x0, y0, x1, y1] are min/max coordinates of a box, xc, yc are its center coordinates, and w, h its width and height. In some cases, wa, ha, width and height of the matching anchor are also used. Notes: (1) We include an early arXiv version of [26], which used a different conï¬ guration from that published at ECCV 2016; (2) [29] uses a fast feature extractor described as being inspired by GoogLeNet [39], which we do not compare to; (3) YOLO matches a groundtruth box to an anchor if its center falls inside the anchor (we refer to this as BoxCenter). (a) SSD. (b) Faster RCNN. (c) R-FCN.
1611.10012#8
1611.10012#10
1611.10012
[ "1512.00567" ]
1611.10012#10
Speed/accuracy trade-offs for modern convolutional object detectors
; a AS: iP â â _ Gg animate p sete are Figure 1: High level diagrams of the detection meta-architectures compared in this paper. (RPN) stage of Faster R-CNN [40, 31] use this approach to predict class-agnostic box proposals. [33, 29, 30, 9] use SSD-like architectures to predict ï¬ nal (1 of K) class labels. And Poirson et al., [28] extended this idea to predict boxes, classes and pose.
1611.10012#9
1611.10012#11
1611.10012
[ "1512.00567" ]
1611.10012#11
Speed/accuracy trade-offs for modern convolutional object detectors
ticularly inï¬ uential, and has led to a number of follow-up works [2, 35, 34, 46, 13, 5, 19, 45, 24, 47] (including SSD and R-FCN). Notably, half of the submissions to the COCO object detection server as of November 2016 are reported to be based on the Faster R-CNN system in some way. # 2.1.2 Faster R-CNN. In the Faster R-CNN setting, detection happens in two stages (Figure 1b).
1611.10012#10
1611.10012#12
1611.10012
[ "1512.00567" ]
1611.10012#12
Speed/accuracy trade-offs for modern convolutional object detectors
In the ï¬ rst stage, called the region pro- posal network (RPN), images are processed by a feature extractor (e.g., VGG-16), and features at some selected in- termediate level (e.g., â conv5â ) are used to predict class- agnostic box proposals. The loss function for this ï¬ rst stage takes the form of Equation 1 using a grid of anchors tiled in space, scale and aspect ratio. In the second stage, these (typically 300) box proposals are used to crop features from the same intermediate feature map which are subsequently fed to the remainder of the fea- ture extractor (e.g., â
1611.10012#11
1611.10012#13
1611.10012
[ "1512.00567" ]
1611.10012#13
Speed/accuracy trade-offs for modern convolutional object detectors
fc6â followed by â fc7â ) in order to pre- dict a class and class-speciï¬ c box reï¬ nement for each pro- posal. The loss function for this second stage box classiï¬ er also takes the form of Equation 1 using the proposals gener- ated from the RPN as anchors. Notably, one does not crop proposals directly from the image and re-run crops through the feature extractor, which would be duplicated computa- tion. However there is part of the computation that must be run once per region, and thus the running time depends on the number of regions proposed by the RPN.
1611.10012#12
1611.10012#14
1611.10012
[ "1512.00567" ]
1611.10012#14
Speed/accuracy trade-offs for modern convolutional object detectors
# 2.2. R-FCN While Faster R-CNN is an order of magnitude faster than Fast R-CNN, the fact that the region-speciï¬ c component must be applied several hundred times per image led Dai et al. [6] to propose the R-FCN (Region-based Fully Con- volutional Networks) method which is like Faster R-CNN, but instead of cropping features from the same layer where region proposals are predicted, crops are taken from the last layer of features prior to prediction (Figure 1c). This approach of pushing cropping to the last layer minimizes the amount of per-region computation that must be done. Dai et al. argue that the object detection task needs local- ization representations that respect translation variance and thus propose a position-sensitive cropping mechanism that is used instead of the more standard ROI pooling operations used in [10, 31] and the differentiable crop mechanism of [5]. They show that the R-FCN model (using Resnet 101) could achieve comparable accuracy to Faster R-CNN often at faster running times. Recently, the R-FCN model was also adapted to do instance segmentation in the recent TA- FCN model [22], which won the 2016 COCO instance seg- mentation challenge. Since appearing in 2015, Faster R-CNN has been par-
1611.10012#13
1611.10012#15
1611.10012
[ "1512.00567" ]
1611.10012#15
Speed/accuracy trade-offs for modern convolutional object detectors
3 # 3. Experimental setup The introduction of standard benchmarks such as Im- agenet [32] and COCO [25] has made it easier in recent years to compare detection methods with respect to ac- curacy. However, when it comes to speed and memory, apples-to-apples comparisons have been harder to come by. Prior works have relied on different deep learning frame- works (e.g., DistBelief [7], Caffe [18], Torch [4]) and dif- ferent hardware. Some papers have optimized for accuracy; others for speed.
1611.10012#14
1611.10012#16
1611.10012
[ "1512.00567" ]
1611.10012#16
Speed/accuracy trade-offs for modern convolutional object detectors
And ï¬ nally, in some cases, metrics are reported using slightly different training sets (e.g., COCO training set vs. combined training+validation sets). In order to better perform apples-to-apples comparisons, we have created a detection platform in Tensorï¬ ow [1] and have recreated training pipelines for SSD, Faster R-CNN and R-FCN meta-architectures on this platform. Having a uniï¬ ed framework has allowed us to easily swap feature ex- tractor architectures, loss functions, and having it in Ten- sorï¬ ow allows for easy portability to diverse platforms for deployment. In the following we discuss ways to conï¬ gure model architecture, loss function and input on our platform â
1611.10012#15
1611.10012#17
1611.10012
[ "1512.00567" ]
1611.10012#17
Speed/accuracy trade-offs for modern convolutional object detectors
knobs that can be used to trade speed and accuracy. # 3.1. Architectural conï¬ guration # 3.1.1 Feature extractors. In all of the meta-architectures, we ï¬ rst apply a convolu- tional feature extractor to the input image to obtain high- level features. The choice of feature extractor is crucial as the number of parameters and types of layers directly affect memory, speed, and performance of the detector. We have selected six representative feature extractors to compare in this paper and, with the exception of MobileNet [14], all have open source Tensorï¬ ow implementations and have had sizeable inï¬ uence on the vision community. In more detail, we consider the following six feature ex- tractors. We use VGG-16 [37] and Resnet-101 [13], both of which have won many competitions such as ILSVRC and COCO 2015 (classiï¬ cation, detection and segmentation). We also use Inception v2 [16], which set the state of the art in the ILSVRC 2014 classiï¬ cation and detection challenges, as well as its successor Inception v3 [42]. Both of the In- ception networks employed â Inception unitsâ which made it possible to increase the depth and width of a network with- out increasing its computational budget. Recently, Szegedy et al. [38] proposed Inception Resnet (v2), which combines the optimization beneï¬ ts conferred by residual connections with the computation efï¬ ciency of Inception units. Fi- nally, we compare against the new MobileNet network [14], which has been shown to achieve VGG-16 level accuracy on Imagenet with only 1/30 of the computational cost and model size. MobileNet is designed for efï¬ cient inference in various mobile vision applications.
1611.10012#16
1611.10012#18
1611.10012
[ "1512.00567" ]
1611.10012#18
Speed/accuracy trade-offs for modern convolutional object detectors
Its building blocks are 4 depthwise separable convolutions which factorize a stan- dard convolution into a depthwise convolution and a 1 Ã 1 convolution, effectively reducing both computational cost and number of parameters. For each feature extractor, there are choices to be made in order to use it within a meta-architecture. For both Faster R-CNN and R-FCN, one must choose which layer to use for predicting region proposals. In our experiments, we use the choices laid out in the original papers when possible. For example, we use the â conv5â
1611.10012#17
1611.10012#19
1611.10012
[ "1512.00567" ]
1611.10012#19
Speed/accuracy trade-offs for modern convolutional object detectors
layer from VGG-16 [31] and the last layer of conv 4 x layers in Resnet-101 [13]. For other feature extractors, we have made analogous choices. See supplementary materials for more details. Liu et al. [26] showed that in the SSD setting, using multiple feature maps to make location and conï¬ dence pre- dictions at multiple scales is critical for good performance. For VGG feature extractors, they used conv4 3, fc7 (con- verted to a convolution layer), as well as a sequence of added layers. In our experiments, we follow their method- ology closely, always selecting the topmost convolutional feature map and a higher resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2 with each addi- tional layer used for prediction. However unlike [26], we use batch normalization in all additional layers. For comparison, feature extractors used in previous works are shown in Table 1. In this work, we evaluate all combinations of meta-architectures and feature extractors, most of which are novel. Notably, Inception networks have never been used in Faster R-CNN frameworks and until re- cently were not open sourced [36]. Inception Resnet (v2) and MobileNet have not appeared in the detection literature to date.
1611.10012#18
1611.10012#20
1611.10012
[ "1512.00567" ]
1611.10012#20
Speed/accuracy trade-offs for modern convolutional object detectors
# 3.1.2 Number of proposals. For Faster R-CNN and R-FCN, we can also choose the number of region proposals to be sent to the box classiï¬ er at test time. Typically, this number is 300 in both settings, but an easy way to save computation is to send fewer boxes po- tentially at the risk of reducing recall. In our experiments, we vary this number of proposals between 10 and 300 in order to explore this trade-off. # 3.1.3 Output stride settings for Resnet and Inception Resnet. Our implementation of Resnet-101 is slightly modiï¬ ed from the original to have an effective output stride of 16 instead of 32; we achieve this by modifying the conv5 1 layer to have stride 1 instead of 2 (and compensating for re- duced stride by using atrous convolutions in further layers) as in [6]. For Faster R-CNN and R-FCN, in addition to the default stride of 16, we also experiment with a (more ex- pensive) stride 8 Resnet-101 in which the conv4 1 block is additionally modiï¬ ed to have stride 1. Likewise, we exper- iment with stride 16 and stride 8 versions of the Inception Resnet network.
1611.10012#19
1611.10012#21
1611.10012
[ "1512.00567" ]
1611.10012#21
Speed/accuracy trade-offs for modern convolutional object detectors
We ï¬ nd that using stride 8 instead of 16 improves the mAP by a factor of 5%1, but increased run- ning time by a factor of 63%. # 3.2. Loss function conï¬ guration Beyond selecting a feature extractor, there are choices in conï¬ guring the loss function (Equation 1) which can impact training stability and ï¬ nal performance. Here we describe the choices that we have made in our experiments and Ta- ble 1 again compares how similar loss functions are conï¬ g- ured in other works. # 3.2.1 Matching. Determining classiï¬ cation and regression targets for each anchor requires matching anchors to groundtruth instances. Common approaches include greedy bipartite matching (e.g., based on Jaccard overlap) or many-to-one matching strategies in which bipartite-ness is not required, but match- ings are discarded if Jaccard overlap between an anchor and groundtruth is too low. We refer to these strategies as Bipartite or Argmax, respectively. In our experiments we use Argmax matching throughout with thresholds set as suggested in the original paper for each meta-architecture. After matching, there is typically a sampling procedure de- signed to bring the number of positive anchors and negative anchors to some desired ratio. In our experiments, we also ï¬ x these ratios to be those recommended by the paper for each meta-architecture. # 3.2.2 Box encoding. To encode a groundtruth box with respect to its matching anchor, we use the box encoding function Ï (ba; a) = [10 · xc , 5·log w, 5·log h] (also used by [11, 10, 31, 26]). wa Note that the scalar multipliers 10 and 5 are typically used in all of these prior works, even if not explicitly mentioned. # 3.2.3 Location loss (¢;,,.). Following [10, 31, 26], we use the Smooth L1 (or Hu- ber [15]) loss function in all experiments. # 3.3. Input size conï¬ guration.
1611.10012#20
1611.10012#22
1611.10012
[ "1512.00567" ]
1611.10012#22
Speed/accuracy trade-offs for modern convolutional object detectors
In Faster R-CNN and R-FCN, models are trained on im- ages scaled to M pixels on the shorter edge whereas in SSD, images are always resized to a ï¬ xed shape M à M . We explore evaluating each model on downscaled images as 1 i.e., (map8 - map16) / map16 = 0.05. 5 a way to trade accuracy for speed. In particular, we have trained high and low-resolution versions of each model. In the â high-resolutionâ settings, we set M = 600, and in the â low-resolutionâ setting, we set M = 300. In both cases, this means that the SSD method processes fewer pix- els on average than a Faster R-CNN or R-FCN model with all other variables held constant. # 3.4. Training and hyperparameter tuning We jointly train all models end-to-end using asyn- chronous gradient updates on a distributed cluster [7]. For Faster RCNN and R-FCN, we use SGD with momentum with batch sizes of 1 (due to these models being trained using different image sizes) and for SSD, we use RM- SProp [43] with batch sizes of 32 (in a few exceptions we reduced the batch size for memory reasons). Finally we manually tune learning rate schedules individually for each feature extractor.
1611.10012#21
1611.10012#23
1611.10012
[ "1512.00567" ]
1611.10012#23
Speed/accuracy trade-offs for modern convolutional object detectors
For the model conï¬ gurations that match works in literature ([31, 6, 13, 26]), we have reproduced or surpassed the reported mAP results.2 Note that for Faster R-CNN and R-FCN, this end-to- end approach is slightly different from the 4-stage train- ing procedure that is typically used. Additionally, in- stead of using the ROI Pooling layer and Position-sensitive ROI Pooling layers used by [31, 6], we use Tensorï¬
1611.10012#22
1611.10012#24
1611.10012
[ "1512.00567" ]
1611.10012#24
Speed/accuracy trade-offs for modern convolutional object detectors
owâ s â crop and resizeâ operation which uses bilinear interpola- tion to resample part of an image onto a ï¬ xed sized grid. This is similar to the differentiable cropping mechanism of [5], the attention model of [12] as well as the Spatial Transformer Network [17]. However we disable backpropa- gation with respect to bounding box coordinates as we have found this to be unstable during training. Our networks are trained on the COCO dataset, using all training images as well as a subset of validation images, holding out 8000 examples for validation.3 Finally at test time, we post-process detections with non-max suppression using an IOU threshold of 0.6 and clip all boxes to the image window.
1611.10012#23
1611.10012#25
1611.10012
[ "1512.00567" ]
1611.10012#25
Speed/accuracy trade-offs for modern convolutional object detectors
To evaluate our ï¬ nal detections, we use the ofï¬ cial COCO API [23], which measures mAP averaged over IOU thresholds in [0.5 : 0.05 : 0.95], amongst other metrics. # 3.5. Benchmarking procedure To time our models, we use a machine with 32GB RAM, Intel Xeon E5-1650 v2 processor and an Nvidia GeForce GTX Titan X GPU card. Timings are reported on GPU for a batch size of one. The images used for timing are resized so that the smallest size is at least k and then cropped to 2In the case of SSD with VGG, we have reproduced the number re- ported in the ECCV version of the paper, but the most recent version on ArXiv uses an improved data augmentation scheme to obtain somewhat higher numbers, which we have not yet experimented with. 3We remark that this dataset is similar but slightly smaller than the trainval35k set that has been used in several papers, e.g., [2, 26].
1611.10012#24
1611.10012#26
1611.10012
[ "1512.00567" ]
1611.10012#26
Speed/accuracy trade-offs for modern convolutional object detectors
k à k where k is either 300 or 600 based on the model. We average the timings over 500 images. We include postprocessing in our timing (which includes non-max suppression and currently runs only on the CPU). Postprocessing can take up the bulk of the running time for the fastest models at â ¼ 40ms and currently caps our maximum framerate to 25 frames per second. Among other things, this means that while our timing results are compa- rable amongst each other, they may not be directly compara- ble to other reported speeds in the literature. Other potential differences include hardware, software drivers, framework (Tensorï¬ ow in our case), and batch size (e.g., the Liu et al. [26] report timings using batch sizes of 8). Finally, we use tfprof [27] to measure the total memory demand of the models during inference; this gives a more platform inde- pendent measure of memory demand. We also average the memory measurements over three images. # 3.6. Model Details Table 2 summarizes the feature extractors that we use. All models are pretrained on ImageNet-CLS. We give de- tails on how we train the object detectors using these feature extractors below. # 3.6.1 Faster R-CNN implementation of Faster We use Tensorï¬
1611.10012#25
1611.10012#27
1611.10012
[ "1512.00567" ]
1611.10012#27
Speed/accuracy trade-offs for modern convolutional object detectors
owâ s RCNN [31] closely, â crop and resizeâ operation instead of standard ROI pooling . Except for VGG, all the feature extractors use batch normalization after convolutional layers. We freeze the batch normalization parameters to be those estimated during ImageNet pretraining. We train faster RCNN with asynchronous SGD with momentum of 0.9. The initial learning rates depend on which feature extractor we used, as explained below. We reduce the learning rate by 10x after 900K iterations and another 10x after 1.2M iterations. 9 GPU workers are used during asynchronous training. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256, while the minibatch size for box classiï¬ er training is 64.
1611.10012#26
1611.10012#28
1611.10012
[ "1512.00567" ]
1611.10012#28
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ VGG [37]: We extract features from the â conv5â layer whose stride size is 16 pixels. Similar to [5], we crop and resize feature maps to 14x14 then maxpool to 7x7. The initial learning rate is 5e-4. â ¢ Resnet 101 [13]: We extract features from the last layer of the â conv4â block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pix- els. Feature maps are cropped and resized to 14x14 then maxpooled to 7x7. The initial learning rate is 3e- 4.
1611.10012#27
1611.10012#29
1611.10012
[ "1512.00567" ]
1611.10012#29
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Inception V2 [16]: We extract features from the â Mixed 4eâ layer whose stride size is 16 pixels. Fea- 6 Model VGG-16 MobileNet Inception V2 ResNet-101 Inception V3 Inception Resnet V2 14,714,688 3,191,072 10,173,112 42,605,504 21,802,784 54,336,736 Table 2: Properties of the 6 feature extractors that we use. Top-1 accuracy is the classiï¬ cation accuracy on ImageNet. ture maps are cropped and resized to 14x14. The initial learning rate is 2e-4.
1611.10012#28
1611.10012#30
1611.10012
[ "1512.00567" ]
1611.10012#30
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Inception V3 [42]: We extract features from the â Mixed 6eâ layer whose stride size is 16 pixels. Fea- ture maps are cropped and resized to 17x17. The initial learning rate is 3e-4. â ¢ Inception Resnet [38]: We extract features the from â Mixed 6aâ layer including its associated residual lay- ers. When operating in atrous mode, the stride size is 8 pixels, otherwise is 16 pixels. Feature maps are cropped and resized to 17x17. The initial learning rate is 1e-3. â ¢ MobileNet features from the â Conv2d 11â layer whose stride size is 16 pixels. Fea- ture maps are cropped and resized to 14x14.
1611.10012#29
1611.10012#31
1611.10012
[ "1512.00567" ]
1611.10012#31
Speed/accuracy trade-offs for modern convolutional object detectors
The initial learning rate is 3e-3. # 3.6.2 R-FCN We follow the implementation of R-FCN [6] closely, but use Tensorï¬ owâ s â crop and resizeâ operation instead of ROI pooling to crop regions from the position-sensitive score maps. All feature extractors use batch normalization after convolutional layers. We freeze the batch normalization pa- rameters to be those estimated during ImageNet pretraining. We train R-FCN with asynchronous SGD with momentum of 0.9. 9 GPU workers are used during asynchronous train- ing. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256. As of the time of this submission, we do not have R-FCN results for VGG or Inception V3 feature extractors.
1611.10012#30
1611.10012#32
1611.10012
[ "1512.00567" ]
1611.10012#32
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Resnet 101 [13]: We extract features from â block3â layer. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use online hard example mining to sample a minibatch of size 128 for training the box classiï¬ er. The initial learning rate is 3e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
1611.10012#31
1611.10012#33
1611.10012
[ "1512.00567" ]
1611.10012#33
Speed/accuracy trade-offs for modern convolutional object detectors
from â Mixed 4eâ layer whose stride size is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classiï¬ er. The initial learning rate is 2e-4. It is reduced by 10x after 1.8M steps and an- other 10x after 2M steps.
1611.10012#32
1611.10012#34
1611.10012
[ "1512.00567" ]
1611.10012#34
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Inception Resnet [38]: We extract features from â Mixed 6aâ layer including its associated residual lay- ers. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use all proposals from RPN for box classiï¬ er training. The initial learning rate is 7e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
1611.10012#33
1611.10012#35
1611.10012
[ "1512.00567" ]
1611.10012#35
Speed/accuracy trade-offs for modern convolutional object detectors
from â Conv2d 11â layer whose stride size is 16 pix- els. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classiï¬ er. The initial learning rate is 2e-3. Learning rate is reduced by 10x after 1.6M steps and another 10x after 1.8M steps. # 3.6.3 SSD As described in the main paper, we follow the methodol- ogy of [26] closely, generating anchors in the same way and selecting the topmost convolutional feature map and a higher resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2 with each additional layer used for prediction. The feature map selection for Resnet101 is slightly different, as described below. Unlike [26], we use batch normalization in all additional layers, and initialize weights with a truncated normal distri- bution with a standard deviation of Ï
1611.10012#34
1611.10012#36
1611.10012
[ "1512.00567" ]
1611.10012#36
Speed/accuracy trade-offs for modern convolutional object detectors
= .03. With the ex- ception of VGG, we also do not perform â layer normaliza- tionâ (as suggested in [26]) as we found it not to be neces- sary for the other feature extractors. Finally, we employ dis- tributed training with asynchronous SGD using 11 worker machines. Below we discuss the speciï¬ cs for each feature extractor that we have considered. As of the time of this submission, we do not have SSD results for the Inception V3 feature extractor and we only have results for high reso- lution SSD models using the Resnet 101 and Inception V2 feature extractors.
1611.10012#35
1611.10012#37
1611.10012
[ "1512.00567" ]
1611.10012#37
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ VGG [37]: Following the paper, we use conv4 3, and fc7 layers, appending ï¬ ve additional convolutional lay- ers with decaying spatial resolution with depths 512, 7 256, 256, 256, 256, respectively. We apply L2 normal- ization to the conv4 3 layer, scaling the feature norm at each location in the feature map to a learnable scale, s, which is initialized to 20.0. During training, we use a base learning rate of lrbase = .0003, but use a warm-up learning rate scheme in which we ï¬ rst train with a learning rate of 0.82 · lrbase for 10K iterations followed by 0.8 · lrbase for another 10K iterations.
1611.10012#36
1611.10012#38
1611.10012
[ "1512.00567" ]
1611.10012#38
Speed/accuracy trade-offs for modern convolutional object detectors
â ¢ Resnet 101 [13]: We use the feature map from the last layer of the â conv4â block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pix- els. Five additional convolutional layers with decay- ing spatial resolution are appended, which have depths 512, 512, 256, 256, 128, respectively. We have exper- imented with including the feature map from the last layer of the â conv5â block. With â conv5â
1611.10012#37
1611.10012#39
1611.10012
[ "1512.00567" ]
1611.10012#39
Speed/accuracy trade-offs for modern convolutional object detectors
features, the mAP numbers are very similar, but the computational costs are higher. Therefore we choose to use the last layer of the â conv4â block. During training, a base learning rate of 3e-4 is used. We use a learning rate warm up strategy similar to the VGG one. â ¢ Inception V2 [16]: We use Mixed 4c and Mixed 5c, appending four additional convolutional layers with decaying resolution with depths 512, 256, 256, 128 re- spectively. We use ReLU6 as the non-linear activation function for each conv layer. During training, we use a base learning rate of 0.002, followed by learning rate decay of 0.95 every 800k steps.
1611.10012#38
1611.10012#40
1611.10012
[ "1512.00567" ]
1611.10012#40
Speed/accuracy trade-offs for modern convolutional object detectors
[38]: We use Mixed 6a and Conv2d 7b, appending three additional convolutional layers with decaying resolution with depths 512, 256, 128 respectively. We use ReLU as the non-linear acti- vation function for each conv layer. During training, we use a base learning rate of 0.0005, followed by learning rate decay of 0.95 every 800k steps. ⠢ MobileNet [14]: We use conv 11 and conv 13, ap- pending four additional convolutional layers with de- caying resolution with depths 512, 256, 256, 128 re- spectively. The non-linear activation function we use is ReLU6 and both batch norm parameters β and γ are trained. During training, we use a base learning rate of 0.004, followed by learning rate decay of 0.95 every 800k steps.
1611.10012#39
1611.10012#41
1611.10012
[ "1512.00567" ]
1611.10012#41
Speed/accuracy trade-offs for modern convolutional object detectors
# 4. Results In this section we analyze the data that we have collected by training and benchmarking detectors, sweeping over model conï¬ gurations as described in Section 3. Each such model conï¬ guration includes a choice of meta-architecture, feature extractor, stride (for Resnet and Inception Resnet) as 40 Faster R-CNN w/ResNet, Hi Res, 50 Proposals @ Faster RCNN 35 renw/ ResNet, Hi Res, 100 Proposals Be oe © % ce Ce 30 fe @ E 2 =25 + ea o id e > oP, 20 ? I SSD w/Inception V2, Lo Res 15 SSD w/MobileNet, Lo Res 10 0 200 400 Meta Architecture R-FCN HS rT OTT TOT Te @ ssD Faster R-CNN w/Inception Resnet, Hi Res, 300 Proposals, Stride 8 Feature Extractor Inception Resnet V2 Inception V2 Inception V3 MobileNet Resnet 101 VGG 600 800 1000 GPU Time Figure 2: Accuracy vs time, with marker shapes indicating meta-architecture and colors indicating feature extractor. Each (meta-architecture, feature extractor) pair can correspond to multiple points on this plot due to changing input sizes, stride, etc.
1611.10012#40
1611.10012#42
1611.10012
[ "1512.00567" ]
1611.10012#42
Speed/accuracy trade-offs for modern convolutional object detectors
minival mAP 19.3 22 32 30.4 35.7 test-dev mAP 18.8 21.6 31.9 30.3 35.6 # Table 3: Test-dev performance of the â criticalâ points along our optimality frontier. well as input resolution and number of proposals (for Faster R-CNN and R-FCN). For each such model conï¬ guration, we measure timings on GPU, memory demand, number of parameters and ï¬ oat- ing point operations as described below. We make the entire table of results available in the supplementary material, not- ing that as of the time of this submission, we have included 147 model conï¬ gurations; models for a small subset of ex- perimental conï¬ gurations (namely some of the high resolu- tion SSD models) have yet to converge, so we have for now omitted them from analysis. to almost 1 second. Generally we observe that R-FCN and SSD models are faster on average while Faster R-CNN tends to lead to slower but more accurate models, requir- ing at least 100 ms per image. However, as we discuss be- low, Faster R-CNN models can be just as fast if we limit the number of regions proposed.
1611.10012#41
1611.10012#43
1611.10012
[ "1512.00567" ]
1611.10012#43
Speed/accuracy trade-offs for modern convolutional object detectors
We have also overlaid an imaginary â optimality frontierâ representing points at which better accuracy can only be attained within this fam- ily of detectors by sacriï¬ cing speed. In the following, we highlight some of the key points along the optimality fron- tier as the best detectors to use and discuss the effect of the various model conï¬ guration options in isolation. # 4.1. Analyses # 4.1.1 Accuracy vs time # 4.1.2 Critical points on the optimality frontier. Figure 2 is a scatterplot visualizing the mAP of each of our model conï¬ gurations, with colors representing feature ex- tractors, and marker shapes representing meta-architecture.
1611.10012#42
1611.10012#44
1611.10012
[ "1512.00567" ]
1611.10012#44
Speed/accuracy trade-offs for modern convolutional object detectors
Running time per image ranges from tens of milliseconds (Fastest: SSD w/MobileNet): On the fastest end of this op- timality frontier, we see that SSD models with Inception v2 and Mobilenet feature extractors are most accurate of the fastest models. Note that if we ignore postprocessing 8 32 Meta Architecture @ Faster RCNN 28 @ R-FCN 30 e ssD a, 26 < E24 = 5 22 8 e g js s © 20 e 5 gz 3 18 3 8 3 £ e ° 16 3) 2 ° 14 70 72 74 e a nN 3 = 2 â g 5 3 $ cre ome 5 ono he â g g = = E J: 76 78 80 82 Feature Extractor Accuracy Figure 3: Accuracy of detector (mAP on COCO) vs accuracy of feature extractor (as measured by top-1 accuracy on ImageNet-CLS). To avoid crowding the plot, we show only the low resolution models. 60 TE Overall mAP TE A? (large) a AP (medium) a mA? (small) 50 40 30 20 10 0 Faster Faster Faster Faster Faster RCNN | ssD | RCNN | RFCN | ssD | RCNN | R-FCN | ssp | RCNN | RFCN | ssD | RCNN | R-FCN | SSD VGG MobileNet Inception V2 Resnet 101 Inception Resnet V2 Figure 4: Accuracy stratiï¬ ed by object size, meta-architecture and feature extractor, We ï¬ x the image resolution to 300. costs, Mobilenet seems to be roughly twice as fast as In- ception v2 while being slightly worse in accuracy. (Sweet Spot: R-FCN w/Resnet or Faster R-CNN w/Resnet and only 50 proposals):
1611.10012#43
1611.10012#45
1611.10012
[ "1512.00567" ]
1611.10012#45
Speed/accuracy trade-offs for modern convolutional object detectors
There is an â elbowâ in the middle of the optimality frontier occupied by R-FCN models using Residual Network feature extractors which seem to strike the best balance between speed and accuracy among our model conï¬ gurations. As we discuss below, Faster R-CNN w/Resnet models can attain similar speeds if we limit the number of proposals to 50. (Most Accurate: Faster R-CNN w/Inception Resnet at stride 8): Finally Faster R-CNN with dense output Inception Resnet models attain the best pos- sible accuracy on our optimality frontier, achieving, to our knowledge, the state-of-the-art single model performance. However these models are slow, requiring nearly a second of processing time. The overall mAP numbers for these 5 models are shown in Table 3. # 4.1.3 The effect of the feature extractor. Intuitively, stronger performance on classiï¬ cation should be positively correlated with stronger performance on COCO detection. To verify this, we investigate the relationship be- tween overall mAP of different models and the Top-1 Ima- genet classiï¬ cation accuracy attained by the pretrained fea-
1611.10012#44
1611.10012#46
1611.10012
[ "1512.00567" ]
1611.10012#46
Speed/accuracy trade-offs for modern convolutional object detectors
9 40 Meta Architecture @ Faster RCNN fi R-FCN @ ssp @ C) e 35 ow e@ ee o * Ms 30 ge e t +4 ° = 25 ec a g Cd r) e fs) o,! $ 20 @ 8 15 e O Resolution @ 300 @ 600 10 ie} 200 400 600 800 1000 GPU Time Figure 5: Effect of image resolution. ture extractor used to initialize each model. Figure 3 in- dicates that there is indeed an overall correlation between classiï¬ cation and detection performance. However this cor- relation appears to only be signiï¬ cant for Faster R-CNN and R-FCN while the performance of SSD appears to be less re- liant on its feature extractorâ s classiï¬ cation accuracy. objects, conï¬ rms that high resolution models lead to signif- icantly better mAP results on small objects (by a factor of 2 in many cases) and somewhat better mAP results on large objects as well. We also see that strong performance on small objects implies strong performance on large objects in our models, (but not vice-versa as SSD models do well on large objects but not small). 4.1.4 The effect of object size. Figure 4 shows performance for different models on dif- ferent sizes of objects. Not surprisingly, all methods do much better on large objects. We also see that even though SSD models typically have (very) poor performance on small objects, they are competitive with Faster RCNN and R-FCN on large objects, even outperforming these meta- architectures for the faster and more lightweight feature ex- tractors. # 4.1.5 The effect of image size. It has been observed by other authors that input resolution can signiï¬ cantly impact detection accuracy. From our ex- periments, we observe that decreasing resolution by a fac- tor of two in both dimensions consistently lowers accuracy (by 15.88% on average) but also reduces inference time by a relative factor of 27.4% on average. One reason for this effect is that high resolution inputs allow for small objects to be resolved. Figure 5 compares detector performance on large objects against that on small # 4.1.6 The effect of the number of proposals.
1611.10012#45
1611.10012#47
1611.10012
[ "1512.00567" ]
1611.10012#47
Speed/accuracy trade-offs for modern convolutional object detectors
For Faster R-CNN and R-FCN, we can adjust the number of proposals computed by the region proposal network. The authors in both papers use 300 boxes, however, our experi- ments suggest that this number can be signiï¬ cantly reduced without harming mAP (by much). In some feature extrac- tors where the â box classiï¬ erâ portion of Faster R-CNN is expensive, this can lead to signiï¬ cant computational sav- ings. Figure 6a visualizes this trade-off curve for Faster R- CNN models with high resolution inputs for different fea- ture extractors. We see that Inception Resnet, which has 35.4% mAP with 300 proposals can still have surprisingly high accuracy (29% mAP) with only 10 proposals. The sweet spot is probably at 50 proposals, where we are able to obtain 96% of the accuracy of using 300 proposals while reducing running time by a factor of 3. While the compu- tational savings are most pronounced for Inception Resnet, we see that similar tradeoffs hold for all feature extractors. Figure 6b visualizes the same trade-off curves for R-
1611.10012#46
1611.10012#48
1611.10012
[ "1512.00567" ]
1611.10012#48
Speed/accuracy trade-offs for modern convolutional object detectors
10 (a) FRCNN (b) RFCN Figure 6: Effect of proposing increasing number of regions on mAP accuracy (solid lines) and GPU inference time (dotted). Surprisingly, for Faster R-CNN with Inception Resnet, we obtain 96% of the accuracy of using 300 proposals by using only 50 proposals, which reduces running time by a factor of 3. 400 200 150 101 3} w ° Faster RCNN Faster RCNN Faster R-FCN RCNN VGG MobileNet R-FCN Inception v2 GPU time (ms) for Resolution=300 fm GPU Time Faster RCNN Faster R-FCN RCNN R-FCN Resnet 101 Inception Resnet V2
1611.10012#47
1611.10012#49
1611.10012
[ "1512.00567" ]
1611.10012#49
Speed/accuracy trade-offs for modern convolutional object detectors
Figure 7: GPU time (milliseconds) for each model, for image resolution of 300. FCN models and shows that the computational savings from using fewer proposals in the R-FCN setting are minimal â this is not surprising as the box classiï¬ er (the expen- sive part) is only run once per image. We see in fact that at 100 proposals, the speed and accuracy for Faster R-CNN models with ResNet becomes roughly comparable to that of equivalent R-FCN models which use 300 proposals in both mAP and GPU speed. 4.1.7 FLOPs analysis. Figure 7 plots the GPU time for each model combination. However, this is very platform dependent. Counting FLOPs (multiply-adds) gives us a platform independent measure of computation, which may or may not be linear with respect to actual running times due to a number of issues such as caching, I/O, hardware optimization etc, Figures 8a and 8b plot the FLOP count against observed wallclock times on the GPU and CPU respectively. Inter- estingly, we observe in the GPU plot (Figure 8a) that each
1611.10012#48
1611.10012#50
1611.10012
[ "1512.00567" ]
1611.10012#50
Speed/accuracy trade-offs for modern convolutional object detectors
11 Meta Architecture 800 @ Faster RCNN @ R-FCN @ ssD e @ Q @ 600 e 2 100 cis = = 8 al ry e oom Feature Extractor 200 â Sap a © _ Inception Resnet V2 oo 8 @ = Inception v2 @ = Inception V3 8 8 © MobileNet oO @ = Resnet 101 @ vGG oO 200 400 600 800 1000 GPU Time Meta Architecture 800 @ = Faster RCNN m R-FCN @ ssD e @ Q e 600 e 2 | Ly Feature Extractor @ Inception Resnet V2 @ Inception v2 @ = Inception V3 © MobileNet @ = Resnet 101 @ vGG o 2000 4000 6000 8000 10000 12000 CPU Time Meta Architecture Meta Architecture 800 @ Faster RCNN @ R-FCN @ ssD e 800 @ = Faster RCNN m R-FCN @ ssD e @ @ Q @ Q e 600 600 e e 2 100 = = 8 al | Ly ry e oom Feature Extractor Feature Extractor 200 â Sap a © _ Inception Resnet V2 @ Inception Resnet oo 8 @ = Inception v2 @ Inception v2 @ = Inception V3 @ = Inception V3 8 8 © MobileNet © MobileNet oO @ = Resnet 101 @ = Resnet 101 @ vGG @ vGG oO 200 400 600 800 1000 o 2000 4000 6000 8000 10000 GPU Time CPU Time (a) GPU. (b) CPU. # (a) GPU. (b) CPU. Figure 8: FLOPS vs time. # Memory (MB) for Resolution=300 10000 a Memory 8000 6000 4000 2000 Faster Faster Faster | Faster Faster RCNN SsD RCNN R-FCN ssD RCNN R-FCN ssD RCNN R-FCN ssD RCNN R-FCN ssD VGG MobileNet Inception V2 Resnet 101 Inception Resnet V2 Figure 9: Memory (Mb) usage for each model.
1611.10012#49
1611.10012#51
1611.10012
[ "1512.00567" ]
1611.10012#51
Speed/accuracy trade-offs for modern convolutional object detectors
Note that we measure total memory usage rather than peak memory usage. Moreover, we include all data points corresponding to the low-resolution models here. The error bars reï¬ ect variance in memory usage by using different numbers of proposals for the Faster R-CNN and R-FCN models (which leads to the seemingly considerable variance in the Faster-RCNN with Inception Resnet bar). model has a different average ratio of ï¬ ops to observed run- ning time in milliseconds. For denser block models such as Resnet 101, FLOPs/GPU time is typically greater than 1, perhaps due to efï¬ ciency in caching. For Inception and Mo- bilenet models, this ratio is typically less than 1 â we con- jecture that this could be that factorization reduces FLOPs, but adds more overhead in memory I/O or potentially that current GPU instructions (cuDNN) are more optimized for dense convolution. Figure 9 plots some of the same information in more detail, drilling down by meta-architecture and feature extractor se- lection. As with speed, Mobilenet is again the cheapest, re- quiring less than 1Gb (total) memory in almost all settings. # 4.1.9 Good localization at .75 IOU means good local- ization at all IOU thresholds. # 4.1.8 Memory analysis. For memory benchmarking, we measure total usage rather than peak usage. Figures 10a, 10b plot memory usage against GPU and CPU wallclock times. Overall, we observe high correlation with running time with larger and more powerful feature extractors requiring much more memory. While slicing the data by object size leads to interesting insights, it is also worth nothing that slicing data by IOU threshold does not give much additional information. Fig- ure 11 shows in fact that both [email protected] and [email protected] performances are almost perfectly linearly correlated with mAP@[.5:.95]. Thus detectors that have poor performance at the higher IOU thresholds always also show poor perfor- mance at the lower IOU thresholds. This being said, we also observe that [email protected] is slightly more tightly corre-
1611.10012#50
1611.10012#52
1611.10012
[ "1512.00567" ]
1611.10012#52
Speed/accuracy trade-offs for modern convolutional object detectors
12 (a) GPU (b) CPU . . Figure 10: Memory (Mb) vs time. 20000 Meta Architecture @ FasterRCNN mm RFCN @ SSD 15000 e 10000 -- Memory (MB) Feature Extractor Inception Resnet V2 Inception V2 Inception V3. MobileNet Resnet 101 ves 600 700 800 900 20000 Meta Architecture @ FasterRCNN mm RFCN @ SSD CS) 15000 ° a = 2 10000 -- 5 £ 5 © Inception Resnet V2 5000 @ Inception v2 @ Inception V3 @ MobileNet @ Reset 101 @ vcG 0 0 2000 4000 6000 8000 10000 12000 CPU Time lated with mAP@[.5:.95] (with R2 > .99), so if we were to replace the standard COCO metric with mAP at a single IOU threshold, we would likely choose IOU=.75. COCO category for each model and declared two models to be too similar if their category-wise AP vectors had cosine distance greater than some threshold. # 4.2. State-of-the-art detection on COCO
1611.10012#51
1611.10012#53
1611.10012
[ "1512.00567" ]
1611.10012#53
Speed/accuracy trade-offs for modern convolutional object detectors
Finally, we brieï¬ y describe how we ensembled some of our models to achieve the current state of the art perfor- mance on the 2016 COCO object detection challenge. Our model attains 41.3% mAP@[.5, .95] on the COCO test set and is an ensemble of ï¬ ve Faster R-CNN models based on Resnet and Inception Resnet feature extractors. This outper- forms the previous best result (37.1% mAP@[.5, .95]) by MSRA, which used an ensemble of three Resnet-101 mod- els [13]. Table 4 summarizes the performance of our model and highlights how our model has improved on the state-of- the-art across all COCO metrics. Most notably, our model achieves a relative improvement of nearly 60% on small ob- ject recall over the previous best result. Even though this ensemble with state-of-the-art numbers could be viewed as an extreme point on the speed/accuracy tradeoff curves (re- quires â ¼50 end-to-end network evaluations per image), we have chosen to present this model in isolation since it is not comparable to the â single modelâ results that we focused on in the rest of the paper. To construct our ensemble, we selected a set of ï¬ ve mod- els from our collection of Faster R-CNN models. Each of the models was based on Resnet and Inception Resnet fea- ture extractors with varying output stride conï¬ gurations, re- trained using variations on the loss functions, and different random orderings of the training data. Models were se- lected greedily using their performance on a held-out val- idation set. However, in order to take advantage of models with complementary strengths, we also explicitly encour- age diversity by pruning away models that are too similar to previously selected models (c.f., [21]). To do this, we computed the vector of average precision results across each Table 5 summarizes the ï¬ nal selected model speciï¬ ca- tions as well as their individual performance on COCO as single models.4 Ensembling these ï¬ ve models using the procedure described in [13] (Appendix A) and using multi- crop inference then yielded our ï¬ nal model. Note that we do not use multiscale training, horizontal ï¬
1611.10012#52
1611.10012#54
1611.10012
[ "1512.00567" ]
1611.10012#54
Speed/accuracy trade-offs for modern convolutional object detectors
ipping, box reï¬ ne- ment, box voting, or global context which are sometimes used in the literature. Table 6 compares a single modelâ s performance against two ways of ensembling, and shows that (1) encouraging for diversity did help against a hand selected ensemble, and (2) ensembling and multicrop were responsible for almost 7 points of improvement over a sin- gle model. # 4.3. Example detections In Figures 12 to 17 we visualize detections on images from the COCO dataset, showing side-by-side comparisons of ï¬ ve of the detectors that lie on the â optimality frontierâ of the speed-accuracy trade-off plot. To visualize, we select detections with score greater than a threshold and plot the top 20 detections in each image. We use a threshold of .5 for Faster R-CNN and R-FCN and .3 for SSD. These thresh- olds were hand-tuned for (subjective) visual attractiveness and not using rigorous criteria so we caution viewers from reading too much into the tea leaves from these visualiza- tions. This being said, we see that across our examples, all of the detectors perform reasonably well on large objects â SSD only shows its weakness on small objects, missing some of the smaller kites and people in the ï¬ rst image as well as the smaller cups and bottles on the dining table in 4Note that these numbers were computed on a held-out validation set and are not strictly comparable to the ofï¬ cial COCO test-dev data results (though they are expected to be very close).
1611.10012#53
1611.10012#55
1611.10012
[ "1512.00567" ]
1611.10012#55
Speed/accuracy trade-offs for modern convolutional object detectors
13 60 MAP Subset ooâ 50. @ [email protected] co © [email protected] e 40 COP < 30 20 10 0 10 15 20 25 30 35 40 Overall mAP Figure 11: Overall COCO mAP (@[.5:.95]) for all experiments plotted against corresponding [email protected] and [email protected]. It is unsurprising that these numbers are correlated, but it is interesting that they are almost perfectly correlated so for these models, it is never the case that a model has strong performance at 50% IOU but weak performance at 75% IOU. Ours MSRA2015 Trimps-Soushen AP 0.413 0.371 0.359 [email protected] 0.62 0.588 0.58 [email protected] 0.45 0.398 0.383 APsmall 0.231 0.173 0.158 APmed 0.436 0.415 0.407 APlarge 0.547 0.525 0.509 AR@100 0.604 0.489 0.497 ARsmall 0.424 0.267 0.269 ARmed 0.641 0.552 0.557 ARlarge 0.748 0.679 0.683 Table 4: Performance on the 2016 COCO test-challenge dataset. AP and AR refer to (mean) average precision and average recall respectively. Our model achieves a relative improvement of nearly 60% on small objects recall over the previous state-of-the-art COCO detector. AP 32.93 33.3 34.75 35.0 35.64 Feature Extractor Resnet 101 Resnet 101 Inception Resnet (v2) Inception Resnet (v2) Inception Resnet (v2) Output stride 8 8 16 16 8 loss ratio 3:1 1:1 1:1 2:1 1:1
1611.10012#54
1611.10012#56
1611.10012
[ "1512.00567" ]
1611.10012#56
Speed/accuracy trade-offs for modern convolutional object detectors
Table 5: Summary of single models that were automatically selected to be part of the diverse ensemble. Loss ratio refers to the multipliers α, β for location and classiï¬ cation losses, respectively. Faster RCNN with Inception Resnet (v2) Hand selected Faster RCNN ensemble w/multicrop Diverse Faster RCNN ensemble w/multicrop AP 0.347 0.41 0.416 [email protected] 0.555 0.617 0.619 [email protected] 0.367 0.449 0.454 APsmall 0.135 0.236 0.239 APmed 0.381 0.43 0.435 APlarge 0.52 0.542 0.549 Table 6: Effects of ensembling and multicrop inference. Numbers reported on COCO test-dev dataset. Second row (hand selected ensemble) consists of 6 Faster RCNN models with 3 Resnet 101 (v1) and 3 Inception Resnet (v2) and the third row (diverse ensemble) is described in detail in Table 5. the last image.
1611.10012#55
1611.10012#57
1611.10012
[ "1512.00567" ]
1611.10012#57
Speed/accuracy trade-offs for modern convolutional object detectors
# Acknowledgements # 5. Conclusion We would like to thank the following people for their advice and sup- port throughout this project: Tom Duerig, Dumitru Erhan, Jitendra Ma- lik, George Papandreou, Dominik Roblek, Chuck Rosenberg, Nathan Sil- berman, Abhinav Srivastava, Rahul Sukthankar, Christian Szegedy, Jasper Uijlings, Jay Yagnik, Xiangxin Zhu. We have performed an experimental comparison of some of the main aspects that inï¬ uence the speed and accuracy of modern object detectors. We hope this will help practi- tioners choose an appropriate method when deploying ob- ject detection in the real world. We have also identiï¬ ed some new techniques for improving speed without sacri- ï¬ cing much accuracy, such as using many fewer proposals than is usual for Faster R-CNN.
1611.10012#56
1611.10012#58
1611.10012
[ "1512.00567" ]
1611.10012#58
Speed/accuracy trade-offs for modern convolutional object detectors
# References [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorï¬ ow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorï¬ ow. org, 1, 2015. 4 14 (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals (e) FRCNN+IncResnetV2, 300 proposals
1611.10012#57
1611.10012#59
1611.10012
[ "1512.00567" ]
1611.10012#59
Speed/accuracy trade-offs for modern convolutional object detectors
Figure 12: Example detections from 5 different models. Inside- outside net: Detecting objects in context with skip arXiv preprint pooling and recurrent neural networks. arXiv:1512.04143, 2015. 3, 5 [3] A. Canziani, A. Paszke, and E. Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016. 1 [6] J. Dai, Y. Li, K. He, and J. Sun. R-fcn:
1611.10012#58
1611.10012#60
1611.10012
[ "1512.00567" ]
1611.10012#60
Speed/accuracy trade-offs for modern convolutional object detectors
Object detection via region-based fully convolutional networks. arXiv preprint arXiv:1605.06409, 2016. 1, 2, 3, 4, 5, 6 [7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale dis- tributed deep networks. In Advances in neural information processing systems, pages 1223â 1231, 2012. 4, 5 [4] R. Collobert, K. Kavukcuoglu, and C.
1611.10012#59
1611.10012#61
1611.10012
[ "1512.00567" ]
1611.10012#61
Speed/accuracy trade-offs for modern convolutional object detectors
Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011. 4 [8] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scal- In Pro- able object detection using deep neural networks. ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2147â 2154, 2014. 2 Instance-aware semantic seg- mentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015. 3, 5, 6 [9] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd:
1611.10012#60
1611.10012#62
1611.10012
[ "1512.00567" ]
1611.10012#62
Speed/accuracy trade-offs for modern convolutional object detectors
Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017. 3 15 (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals (e) FRCNN+IncResnetV2, 300 proposals Figure 13: Example detections from 5 different models. [10] R. Girshick. Fast r-cnn. In Proceedings of the IEEE Inter- national Conference on Computer Vision, pages 1440â 1448, 2015. 2, 3, 5
1611.10012#61
1611.10012#63
1611.10012
[ "1512.00567" ]
1611.10012#63
Speed/accuracy trade-offs for modern convolutional object detectors
generation. In Proceedings of The 32nd International Con- ference on Machine Learning, pages 1462â 1471, 2015. 5 [11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea- ture hierarchies for accurate object detection and semantic In Proceedings of the IEEE conference on segmentation. computer vision and pattern recognition, pages 580â 587, 2014. 2, 5 I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. Draw:
1611.10012#62
1611.10012#64
1611.10012
[ "1512.00567" ]
1611.10012#64
Speed/accuracy trade-offs for modern convolutional object detectors
A recurrent neural network for image [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. arXiv preprint arXiv:1512.03385, 2015. 3, 4, 5, 6, 7, 13 [14] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efï¬ - cient convolutional neural networks for mobile vision appli- cations. arXiv preprint arXiv:1704.04861, 2017. 4, 6, 7
1611.10012#63
1611.10012#65
1611.10012
[ "1512.00567" ]
1611.10012#65
Speed/accuracy trade-offs for modern convolutional object detectors
16 (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals (e) FRCNN+IncResnetV2, 300 proposals Figure 14: Example detections from 5 different models. [15] P. J. Huber et al. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73â
1611.10012#64
1611.10012#66
1611.10012
[ "1512.00567" ]
1611.10012#66
Speed/accuracy trade-offs for modern convolutional object detectors
101, 1964. 5 [19] K.-H. Kim, S. Hong, B. Roh, Y. Cheon, and M. Park. Pvanet: Deep but lightweight neural networks for real-time object de- tection. arXiv preprint arXiv:1608.08021, 2016. 3 [16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 4, 6, 7
1611.10012#65
1611.10012#67
1611.10012
[ "1512.00567" ]
1611.10012#67
Speed/accuracy trade-offs for modern convolutional object detectors
[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial In Advances in Neural Information transformer networks. Processing Systems, pages 2017â 2025, 2015. 5 [18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Gir- shick, S. Guadarrama, and T. Darrell.
1611.10012#66
1611.10012#68
1611.10012
[ "1512.00567" ]
1611.10012#68
Speed/accuracy trade-offs for modern convolutional object detectors
Caffe: Convolu- tional architecture for fast feature embedding. In Proceed- ings of the 22nd ACM international conference on Multime- dia, pages 675â 678. ACM, 2014. 4 Imagenet classiï¬ cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â 1105, 2012. 2 [21] S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D.
1611.10012#67
1611.10012#69
1611.10012
[ "1512.00567" ]
1611.10012#69
Speed/accuracy trade-offs for modern convolutional object detectors
Batra. Why M heads are better than one: Training a di- verse ensemble of deep networks. 19 Nov. 2015. 13 [22] Y. Li, H. Qi, J. Dai, X. Ji, and W. Yichen. Translation- aware fully convolutional instance segmentation. https: //github.com/daijifeng001/TA-FCN, 2016. 3 17 (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals (e) FRCNN+IncResnetV2, 300 proposals
1611.10012#68
1611.10012#70
1611.10012
[ "1512.00567" ]
1611.10012#70
Speed/accuracy trade-offs for modern convolutional object detectors
Figure 15: Example detections from 5 different models. [23] T. Y. Lin and P. Dollar. Ms coco api. https://github. com/pdollar/coco, 2016. 5 [24] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144, 2016. 3 [25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C.
1611.10012#69
1611.10012#71
1611.10012
[ "1512.00567" ]
1611.10012#71
Speed/accuracy trade-offs for modern convolutional object detectors
Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 1 May 2014. 1, 4 [26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.- Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European Conference on Computer Vision, pages 21â 37. Springer, 2016. 1, 2, 3, 4, 5, 6, 7 tfprof: A proï¬ ling tool for tensorï¬ ow mod- [27] X. Pan. https://github.com/tensorflow/ els. tensorflow/tree/master/tensorflow/tools/ tfprof, 2016. 6 [28] P. Poirson, P. Ammirato, C.-Y. Fu, W. Liu, J. Kosecka, and A. C. Berg.
1611.10012#70
1611.10012#72
1611.10012
[ "1512.00567" ]
1611.10012#72
Speed/accuracy trade-offs for modern convolutional object detectors
Fast single shot detection and pose estimation. 18 bird: 35% i (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres rar go: 90% lap i aaa aa iol i (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals fraser filer 0% lap (e) FRCNN+IncResnetV2, 300 proposals Figure 16: Example detections from 5 different models. arXiv preprint arXiv:1609.05590, 2016. 3 [29] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Uniï¬ ed, real-time object detection. arXiv preprint arXiv:1506.02640, 2015. 1, 3 [30] J. Redmon and A. Farhadi.
1611.10012#71
1611.10012#73
1611.10012
[ "1512.00567" ]
1611.10012#73
Speed/accuracy trade-offs for modern convolutional object detectors
Yolo9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016. 3 [31] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91â 99, 2015. 1, 2, 3, 4, 5, 6 [32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M.
1611.10012#72
1611.10012#74
1611.10012
[ "1512.00567" ]
1611.10012#74
Speed/accuracy trade-offs for modern convolutional object detectors
Bernstein, Imagenet large scale visual recognition challenge. et al. International Journal of Computer Vision, 115(3):211â 252, 2015. 4 [33] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. 3 [34] A. Shrivastava and A. Gupta.
1611.10012#73
1611.10012#75
1611.10012
[ "1512.00567" ]
1611.10012#75
Speed/accuracy trade-offs for modern convolutional object detectors
Contextual priming and feed- back for faster r-cnn. In European Conference on Computer Vision, pages 330â 348. Springer, 2016. 3 [35] A. Shrivastava, A. Gupta, and R. Girshick. Training region- based object detectors with online hard example mining. arXiv preprint arXiv:1604.03540, 2016. 3 Tf-slim: A high library to deï¬ ne complex models in tensorï¬
1611.10012#74
1611.10012#76
1611.10012
[ "1512.00567" ]
1611.10012#76
Speed/accuracy trade-offs for modern convolutional object detectors
ow. 19 (a) SSD+Mobilenet, lowres (b) SSD+InceptionV2, lowres (c) FRCNN+Resnet101, 100 proposals (d) RFCN+Resnet10, 300 proposals (e) FRCNN+IncResnetV2, 300 proposals Figure 17: Example detections from 5 different models. https://research.googleblog.com/2016/08/ tf-slim-high-level-library-to-define. html, 2016. [Online; accessed 6-November-2016]. 4 [40] C. Szegedy, S. Reed, D. Erhan, and D.
1611.10012#75
1611.10012#77
1611.10012
[ "1512.00567" ]
1611.10012#77
Speed/accuracy trade-offs for modern convolutional object detectors
Anguelov. arXiv preprint Scalable, high-quality object detection. arXiv:1412.1441, 2014. 1, 2, 3 [37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4, 6, 7 [41] C. Szegedy, A. Toshev, and D.
1611.10012#76
1611.10012#78
1611.10012
[ "1512.00567" ]
1611.10012#78
Speed/accuracy trade-offs for modern convolutional object detectors
Erhan. Deep neural networks for object detection. In Advances in Neural Information Pro- cessing Systems, pages 2553â 2561, 2013. 2 Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 4, 6, 7 [42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z.
1611.10012#77
1611.10012#79
1611.10012
[ "1512.00567" ]
1611.10012#79
Speed/accuracy trade-offs for modern convolutional object detectors
Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 4, 6 [39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â
1611.10012#78
1611.10012#80
1611.10012
[ "1512.00567" ]
1611.10012#80
Speed/accuracy trade-offs for modern convolutional object detectors
9, 2015. 3 [43] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. 5 20 [44] P. Viola and M. J. Jones. Robust real-time face detection. International journal of computer vision, 57(2):137â 154, 2004. 2 [45] B. Yang, J. Yan, Z. Lei, and S. Z. Li.
1611.10012#79
1611.10012#81
1611.10012
[ "1512.00567" ]
1611.10012#81
Speed/accuracy trade-offs for modern convolutional object detectors
Craft objects from im- ages. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6043â 6051, 2016. 3 [46] S. Zagoruyko, A. Lerer, T.-Y. Lin, P. O. Pinheiro, S. Gross, S. Chintala, and P. Doll´ar. A multipath network for object detection. arXiv preprint arXiv:1604.02135, 2016. 3 [47] A. Zhai, D. Kislyuk, Y. Jing, M. Feng, E. Tzeng, J. Donahue, Y. L. Du, and T. Darrell.
1611.10012#80
1611.10012#82
1611.10012
[ "1512.00567" ]
1611.10012#82
Speed/accuracy trade-offs for modern convolutional object detectors
Visual discovery at pinterest. arXiv preprint arXiv:1702.04680, 2017. 3 21
1611.10012#81
1611.10012
[ "1512.00567" ]
1611.09823#0
Dialogue Learning With Human-In-The-Loop
7 1 0 2 # n a J 3 1 ] I A . s c [ 3 v 3 2 8 9 0 . 1 1 6 1 : v i X r a # Under review as a conference paper at ICLR 2017 # DIALOGUE LEARNING WITH HUMAN-IN-THE-LOOP Jiwei Li, Alexander H. Miller, Sumit Chopra, Marcâ Aurelio Ranzato, Jason Weston Facebook AI Research, New York, USA {jiwel,ahm,spchopra,ranzato,jase}@fb.com # ABSTRACT
1611.09823#1
1611.09823
[ "1511.06931" ]
1611.09823#1
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from ï¬ xed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives fol- lowing its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach. # INTRODUCTION A good conversational agent (which we sometimes refer to as a learner or bot1) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacherâ s feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a ï¬ xed synthetic, domain- speciï¬ c or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from ï¬ xed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication (Bassiri, 2011; Werts et al., 1995), and not from labeled datasets, hence making this an important subject to study. In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacherâ s (dialogue partnerâ s) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following (Weston, 2016).
1611.09823#0
1611.09823#2
1611.09823
[ "1511.06931" ]
1611.09823#2
Dialogue Learning With Human-In-The-Loop
We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk. We explore important issues involved in online learning such as how a bot can be most efï¬ ciently trained using a minimal amount of teacherâ s feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learing with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our ï¬ ndings indicate that it is feasible to build a pipeline that starts from a model trained with ï¬ xed data and then learns from interactions with humans to improve itself. 1In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.
1611.09823#1
1611.09823#3
1611.09823
[ "1511.06931" ]
1611.09823#3
Dialogue Learning With Human-In-The-Loop
1 # Under review as a conference paper at ICLR 2017 # 2 RELATED WORK Reinforcement learning has been widely applied to dialogue, especially in slot ï¬ lling to solve domain-speciï¬ c tasks (Walker, 2000; Schatzmann et al., 2006; Singh et al., 2000; 2002). Efforts include Markov Decision Processes (MDPs) (Levin et al., 1997; 2000; Walker et al., 2003; Pierac- cini et al., 2009), POMDP models (Young et al., 2010; 2013; GaË sic et al., 2013; 2014) and policy learning (Su et al., 2016). Such a line of research focuses mainly on frames with slots to ï¬ ll, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue ut- terances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues (Dodge et al., 2015; Weston, 2016), either given a database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short texts (Weston et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016). In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider ï¬ xed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. Our work is closely related to a recent work from Weston (2016) that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacherâ s feedback, in addition to using reward-based learning of correct answers.
1611.09823#2
1611.09823#4
1611.09823
[ "1511.06931" ]
1611.09823#4
Dialogue Learning With Human-In-The-Loop
However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with ï¬ xed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. The experiments in (Weston, 2016) involve constructing pre-built ï¬ xed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by ï¬
1611.09823#3
1611.09823#5
1611.09823
[ "1511.06931" ]
1611.09823#5
Dialogue Learning With Human-In-The-Loop
xing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets Ï acc examples always correct (the paper looked at values 50%, 10% and 1%). Again, this was not learned, and was ï¬ xed to generate the datasets. Note that the paper refers to these answers as coming from â the learnerâ (which should be the model), but since the policy is ï¬ xed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, (Weston, 2016) can be viewed as training only one iteration, whereas we perform multiple iterations.
1611.09823#4
1611.09823#6
1611.09823
[ "1511.06931" ]
1611.09823#6
Dialogue Learning With Human-In-The-Loop
This is explained further in Sections 4.2 and 5.1. We show in our experiments that performance improves over the iterations, i.e. it is better than the ï¬ rst iteration. We show that such online learning works for both reward- based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work. Finally, (Weston, 2016) only conducted experiments on synthetic or templated language, and not real language, especially the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback and construct experiments in this real setting. # 3 DATASET AND TASKS We begin by describing the data setup we use. In our ï¬ rst set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback.
1611.09823#5
1611.09823#7
1611.09823
[ "1511.06931" ]
1611.09823#7
Dialogue Learning With Human-In-The-Loop
2 # Under review as a conference paper at ICLR 2017 # 3.1 SIMULATOR The simulator adapts two existing ï¬ xed datasets to our online setting. Following Weston (2016), we use (i) the single supporting fact problem from the bAbI datasets (Weston et al., 2015) which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset (Weston et al., 2015) which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher ï¬ rst asks a question from the ï¬ xed set of questions existing in the dataset, (2) the bot answers the question, and ï¬ nally (3) the teacher gives feedback on the botâ s answer. We follow the paradigm deï¬ ned in (Weston, 2016) where the teacherâ s feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec. A and illustrated in Figure 5 of the appendix. We also refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider Task 6 (â partial feedbackâ ): the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure 1. The difference between our simulation and the original ï¬ xed tasks of Weston (2016) is that models are trained on-the-ï¬ y. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacherâ s feedback in the next episode or batch. This means the modelâ s policy affects the data which is used to train it, which was not the case in the previous work. Figure 1:
1611.09823#6
1611.09823#8
1611.09823
[ "1511.06931" ]
1611.09823#8
Dialogue Learning With Human-In-The-Loop
Simulator sample dialogues for the bAbI task (left) and WikiMovies (right). We consider 10 different tasks following Weston (2016) but here describe only Task 6; other tasks are detailed in the appendix. The teacherâ s dialogue is in black and the bot is in red. (+) indicates receiving positive reward, given only 50% of the time even when correct. bAbI Task 6: Partial Rewards Mary went to the hallway. John moved to the bathroom. Mary travelled to the kitchen. Where is Mary?
1611.09823#7
1611.09823#9
1611.09823
[ "1511.06931" ]
1611.09823#9
Dialogue Learning With Human-In-The-Loop
Yes, thatâ s right! Where is John? Yes, thatâ s correct! (+) kitchen bathroom WikiMovies Task 6: Partial Rewards What ï¬ lms are about Hawaii? Correct! Who acted in Licence to Kill? No, the answer is Timothy Dalton. What genre is Saratoga Trunk in? Yes! (+) . . . 50 First Dates Billy Madison Drama Figure 2: Human Dialogue from Mechanical Turk (based on WikiMovies) The human teacherâ s dialogue is in black and the bot is in red. We show examples where the bot answers correctly (left) and incorrectly (right). Real humans provide more variability of language in both questions and textual feedback than in the simulator setup (cf. Figure 1). Sample dialogues with correct answers from the bot:
1611.09823#8
1611.09823#10
1611.09823
[ "1511.06931" ]
1611.09823#10
Dialogue Learning With Human-In-The-Loop
Who wrote the Linguini Incident ? Richard Shepard is one of the right answers here. What year did The World Before Her premiere? Yep! Thatâ s when it came out. Which are the movie genres of Mystery of the 13th Guest? Right, it can also be categorized as a mystery. Sample dialogues with incorrect answers from the bot: What are some movies about a supermarket ? There were many options and this one was not among them. Which are the genres of the ï¬
1611.09823#9
1611.09823#11
1611.09823
[ "1511.06931" ]
1611.09823#11
Dialogue Learning With Human-In-The-Loop
lm Juwanna Mann ? That is incorrect. Remember the question asked for a genre not name. Who wrote the story of movie Coraline ? fantasy Thatâ s a movie genre and not the name of the writer. A better answer would of been Henry Selick or Neil Gaiman. 3 # Under review as a conference paper at ICLR 2017 3.2 MECHANICAL TURK EXPERIMENTS Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see ï¬ t. The exact instructions given to the Turkers is given in Appendix B. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups deï¬ ned in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure 2. # 4 METHODS 4.1 MODEL ARCHITECTURE
1611.09823#10
1611.09823#12
1611.09823
[ "1511.06931" ]
1611.09823#12
Dialogue Learning With Human-In-The-Loop
In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model (Sukhbaatar et al., 2015) as our underlying architecture for learning from dialogue. The input to MemN2N is the last utterance of the dialogue history x as well as a set of memories (context) C=c1, c2, ..., cN . The memory C encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a.
1611.09823#11
1611.09823#13
1611.09823
[ "1511.06931" ]
1611.09823#13
Dialogue Learning With Human-In-The-Loop
In the ï¬ rst step, the query x is transformed to a vector representation u0 by summing up its con- stituent word embeddings: u0 = Ax. The input x is a bag-of-words vector and A is the d à V word embedding matrix where d denotes the emebbding dimension and V denotes the vocabulary size. Each memory ci is similarly transformed to a vector mi. The model will read information from the memory by comparing input representation u0 with memory vectors mi using softmax weights: o1 = p1 i mi i = softmax(uT p1 0 mi) i (1) This process selects memories relevant to the last utterance x, i.e., the memories with large values of p1 i . The returned memory vector o1 is the weighted sum of memory vectors. This process can be repeated to query the memory N times (so called â
1611.09823#12
1611.09823#14
1611.09823
[ "1511.06931" ]
1611.09823#14
Dialogue Learning With Human-In-The-Loop
hopsâ ) by adding on to the original input, u1 = o1 + u0, or to the previous state, un = on + unâ 1, and then using un to query the memories again. In the end, uN is input to a softmax function for the ï¬ nal prediction: N y1, uT where y1, . . . , yL denote the set of candidate answers. If the answer is a word, yi is the corresponding word embedding. If the answer is a sentence, yi is the embedding for the sentence achieved in the same way that we obtain embeddings for query x and memory C. The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next. 4.2 REINFORCEMENT LEARNING In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learn- ing setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacherâ s question. In our setting, the policy chooses only one action for each episode. The reward is either 1 (a reward from the teacher when the bot answers correctly) or 0 otherwise. Note that in our experiments, a reward equal to 0 might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary.
1611.09823#13
1611.09823#15
1611.09823
[ "1511.06931" ]
1611.09823#15
Dialogue Learning With Human-In-The-Loop
4 # Under review as a conference paper at ICLR 2017 When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difï¬ cult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human. This is comparable to the real world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions, it can hear feedback from the teacher. We use batch size to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. In the Reinforcement Learning literature, batch size is related to off-policy learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm (Bottou et al., 2013). We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below. Next, we discuss the learning algorithms we considered in this work. 4.2.1 REWARD-BASED IMITATION (RBI) The simplest algorithm we ï¬ rst consider is the one employed in Weston (2016). RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a MemN2N that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data.
1611.09823#14
1611.09823#16
1611.09823
[ "1511.06931" ]
1611.09823#16
Dialogue Learning With Human-In-The-Loop
In order to make this work in the online setting which requires exploration to find the correct answer, we employ an e-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability 1 â ¢, otherwise it picks a random answer with probability «. The teacher will then give a reward of +1 if the answer is correct, otherwise 0. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones. # 4.2.2 REINFORCE The second algorithm we use is the REINFORCE algorithm (Williams, 1992), which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let a denote the answer that the learner gives, p(a) denote the probability that current model assigns to a, r denote the teacherâ s reward, and J(θ) denote the expectation of the reward.
1611.09823#15
1611.09823#17
1611.09823
[ "1511.06931" ]
1611.09823#17
Dialogue Learning With Human-In-The-Loop
We have: â J(θ) â â log p(a)[r â b] (3) where b is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar b denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and actual reward r, ||r â b||2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model. The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an e-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself.
1611.09823#16
1611.09823#18
1611.09823
[ "1511.06931" ]