id (stringlengths 12–15) | title (stringlengths 8–162) | content (stringlengths 1–17.6k) | prechunk_id (stringlengths 0–15) | postchunk_id (stringlengths 0–15) | arxiv_id (stringlengths 10) | references (listlengths 1) |
---|---|---|---|---|---|---|
1612.03969#41 | Tracking the World State with Recurrent Entity Networks | DNC 9.0 ± 12.6 39.2 ± 20.5 39.6 ± 16.4 0.4 ± 0.7 1.5 ± 1.0 6.9 ± 7.5 9.8 ± 7.0 5.5 ± 5.9 7.7 ± 8.3 9.6 ± 11.4 3.3 ± 5.7 5.0 ± 6.3 3.1 ± 3.6 11.0 ± 7.5 27.2 ± 20.1 53.6 ± 1.9 32.4 ± 8.0 4.2 ± 1.8 64.6 ± 37.4 0.0 ± 0.1 11.2 ± 5.4 16.7 ± 7.6 EntNet 0 ± 0.1 15.3 ± 15.7 29.3 ± 26.3 0.1 ± 0.1 0.4 ± 0.3 0.6 ± 0.8 1.8 ± 1.1 1.5 ± 1.2 0 ± 0.1 0.1 ± 0.2 0.2 ± 0.2 0 ± 0 0 ± 0.1 7.3 ± 4.5 3.6 ± 8.1 53.3 ± 1.2 8.8 ± 3.8 1.3 ± 0.9 70.4 ± 6.1 0 ± 0 5 ± 1.2 9.7 ± 2.6 0.1 2.8 10.6 0 0.4 0.3 0.8 0.1 0 0 0 0 0 3.6 0 52.1 11.7 2.1 63.0 0 4 7.38 15 | 1612.03969#40 | 1612.03969 | [
"1503.01007"
]
|
|
1612.03144#0 | Feature Pyramid Networks for Object Detection | arXiv:1612.03144v2 [cs.CV] 19 Apr 2017 # Feature Pyramid Networks for Object Detection Tsung-Yi Lin1,2, Piotr Dollár1, Ross Girshick1, Kaiming He1, Bharath Hariharan1, and Serge Belongie2 1Facebook AI Research (FAIR) 2Cornell University and Cornell Tech # Abstract | 1612.03144#1 | 1612.03144 | [
"1703.06870"
]
|
|
1612.03144#1 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 6 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. | 1612.03144#0 | 1612.03144#2 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#2 | Feature Pyramid Networks for Object Detection | Code will be made publicly available. [Figure 1 panels: (a) Featurized image pyramid, (b) Single feature map, (c) Pyramidal feature hierarchy, (d) Feature Pyramid Network.] Figure 1. (a) Using an image pyramid to build a feature pyramid. Features are computed on each of the image scales independently, which is slow. (b) Recent detection systems have opted to use only single scale features for faster detection. (c) An alternative is to reuse the pyramidal feature hierarchy computed by a ConvNet as if it were a featurized image pyramid. (d) Our proposed Feature Pyramid Network (FPN) is fast like (b) and (c), but more accurate. | 1612.03144#1 | 1612.03144#3 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#3 | Feature Pyramid Networks for Object Detection | In this figure, feature maps are indicated by blue outlines and thicker outlines denote semantically stronger features. # 1. Introduction Recognizing objects at vastly different scales is a fundamental challenge in computer vision. Feature pyramids built upon image pyramids (for short we call these featurized image pyramids) form the basis of a standard solution [1] (Fig. 1(a)). These pyramids are scale-invariant in the sense that an object's scale change is offset by shifting its level in the pyramid. Intuitively, this property enables a model to detect objects across a large range of scales by scanning the model over both positions and pyramid levels. Featurized image pyramids were heavily used in the era of hand-engineered features [5, 25]. They were so critical that object detectors like DPM [7] required dense scale sampling to achieve good results (e.g., 10 scales per octave). For recognition tasks, engineered features have largely been replaced with features computed by deep convolutional networks (ConvNets) [19, 20]. Aside from being capable of representing higher-level semantics, ConvNets are also more robust to variance in scale and thus facilitate recognition from features computed on a single input scale [15, 11, 29] (Fig. 1(b)). But even with this robustness, pyramids are still needed to get the most accurate results. All recent top entries in the ImageNet [33] and COCO [21] detection challenges use multi-scale testing on featurized image pyramids (e.g., [16, 35]). The principal advantage of featurizing each level of an image pyramid is that it produces a multi-scale feature representation in which all levels are semantically strong, including the high-resolution levels. Nevertheless, featurizing each level of an image pyramid has obvious limitations. Inference time increases considerably (e.g., by four times [11]), making this approach impractical for real applications. Moreover, training deep | 1612.03144#2 | 1612.03144#4 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#4 | Feature Pyramid Networks for Object Detection | 1 networks end-to-end on an image pyramid is infeasible in terms of memory, and so, if exploited, image pyramids are used only at test time [15, 11, 16, 35], which creates an inconsistency between train/test-time inference. For these reasons, Fast and Faster R-CNN [11, 29] opt to not use fea- turized image pyramids under default settings. However, image pyramids are not the only way to com- pute a multi-scale feature representation. A deep ConvNet computes a feature hierarchy layer by layer, and with sub- sampling layers the feature hierarchy has an inherent multi- scale, pyramidal shape. This in-network feature hierarchy produces feature maps of different spatial resolutions, but introduces large semantic gaps caused by different depths. The high-resolution maps have low-level features that harm their representational capacity for object recognition. | 1612.03144#3 | 1612.03144#5 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#5 | Feature Pyramid Networks for Object Detection | The Single Shot Detector (SSD) [22] is one of the ï¬ rst attempts at using a ConvNetâ s pyramidal feature hierarchy as if it were a featurized image pyramid (Fig. 1(c)). Ideally, the SSD-style pyramid would reuse the multi-scale feature maps from different layers computed in the forward pass and thus come free of cost. But to avoid using low-level features SSD foregoes reusing already computed layers and instead builds the pyramid starting from high up in the net- work (e.g., conv4 3 of VGG nets [36]) and then by adding several new layers. Thus it misses the opportunity to reuse the higher-resolution maps of the feature hierarchy. We show that these are important for detecting small objects. The goal of this paper is to naturally leverage the pyra- midal shape of a ConvNetâ s feature hierarchy while cre- ating a feature pyramid that has strong semantics at all scales. To achieve this goal, we rely on an architecture that combines low-resolution, semantically strong features with high-resolution, semantically weak features via a top-down pathway and lateral connections (Fig. 1(d)). The result is a feature pyramid that has rich semantics at all levels and is built quickly from a single input image scale. In other words, we show how to create in-network feature pyramids that can be used to replace featurized image pyramids with- out sacriï¬ cing representational power, speed, or memory. Similar architectures adopting top-down and skip con- nections are popular in recent research [28, 17, 8, 26]. Their goals are to produce a single high-level feature map of a ï¬ ne resolution on which the predictions are to be made (Fig. 2 top). On the contrary, our method leverages the architecture as a feature pyramid where predictions (e.g., object detec- tions) are independently made on each level (Fig. 2 bottom). Our model echoes a featurized image pyramid, which has not been explored in these works. We evaluate our method, called a Feature Pyramid Net- work (FPN), in various systems for detection and segmen- tation [11, 29, 27]. Without bells and whistles, we re- port a state-of-the-art single-model result on the challenging COCO detection benchmark [21] simply based on FPN and | 1612.03144#4 | 1612.03144#6 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#6 | Feature Pyramid Networks for Object Detection | Figure 2. Top: a top-down architecture with skip connections, where predictions are made on the finest level (e.g., [28]). Bottom: our model that has a similar structure but leverages it as a feature pyramid, with predictions made independently at all levels. a basic Faster R-CNN detector [29], surpassing all existing heavily-engineered single-model entries of competition winners. In ablation experiments, we find that for bounding box proposals, FPN signifi | 1612.03144#5 | 1612.03144#7 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#7 | Feature Pyramid Networks for Object Detection | cantly increases the Average Recall (AR) by 8.0 points; for object detection, it improves the COCO-style Average Precision (AP) by 2.3 points and PASCAL-style AP by 3.8 points, over a strong single-scale baseline of Faster R-CNN on ResNets [16]. Our method is also easily extended to mask proposals and improves both instance segmentation AR and speed over state-of-the-art methods that heavily depend on image pyramids. In addition, our pyramid structure can be trained end-to- end with all scales and is used consistently at train/test time, which would be memory-infeasible using image pyramids. As a result, FPNs are able to achieve higher accuracy than all existing state-of-the-art methods. Moreover, this im- provement is achieved without increasing testing time over the single-scale baseline. We believe these advances will facilitate future research and applications. Our code will be made publicly available. | 1612.03144#6 | 1612.03144#8 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#8 | Feature Pyramid Networks for Object Detection | # 2. Related Work Hand-engineered features and early neural networks. SIFT features [25] were originally extracted at scale-space extrema and used for feature point matching. HOG fea- tures [5], and later SIFT features as well, were computed densely over entire image pyramids. These HOG and SIFT pyramids have been used in numerous works for image classiï¬ cation, object detection, human pose estimation, and more. There has also been signiï¬ cant interest in comput- ing featurized image pyramids quickly. Doll´ar et al. [6] demonstrated fast pyramid computation by ï¬ rst computing a sparsely sampled (in scale) pyramid and then interpolat- ing missing levels. Before HOG and SIFT, early work on face detection with ConvNets [38, 32] computed shallow networks over image pyramids to detect faces across scales. Deep ConvNet object detectors. With the development of modern deep ConvNets [19], object detectors like Over- Feat [34] and R-CNN [12] showed dramatic improvements in accuracy. OverFeat adopted a strategy similar to early neural network face detectors by applying a ConvNet as a sliding window detector on an image pyramid. R-CNN adopted a region proposal-based strategy [37] in which each proposal was scale-normalized before classifying with a ConvNet. SPPnet [15] demonstrated that such region-based detectors could be applied much more efï¬ ciently on fea- ture maps extracted on a single image scale. Recent and more accurate detection methods like Fast R-CNN [11] and Faster R-CNN [29] advocate using features computed from a single scale, because it offers a good trade-off between accuracy and speed. Multi-scale detection, however, still performs better, especially for small objects. Methods using multiple layers. A number of recent ap- proaches improve detection and segmentation by using dif- ferent layers in a ConvNet. FCN [24] sums partial scores for each category over multiple scales to compute semantic segmentations. Hypercolumns [13] uses a similar method for object instance segmentation. Several other approaches (HyperNet [18], ParseNet [23], and ION [2]) concatenate features of multiple layers before computing predictions, which is equivalent to summing transformed features. | 1612.03144#7 | 1612.03144#9 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#9 | Feature Pyramid Networks for Object Detection | SSD [22] and MS-CNN [3] predict objects at multiple layers of the feature hierarchy without combining features or scores. There are recent methods exploiting lateral/skip connec- tions that associate low-level feature maps across resolu- tions and semantic levels, including U-Net [31] and Sharp- Mask [28] for segmentation, Recombinator networks [17] for face detection, and Stacked Hourglass networks [26] for keypoint estimation. Ghiasi et al. [8] present a Lapla- cian pyramid presentation for FCNs to progressively reï¬ ne segmentation. Although these methods adopt architectures with pyramidal shapes, they are unlike featurized image pyramids [5, 7, 34] where predictions are made indepen- dently at all levels, see Fig. 2. In fact, for the pyramidal architecture in Fig. 2 (top), image pyramids are still needed to recognize objects across multiple scales [28]. | 1612.03144#8 | 1612.03144#10 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#10 | Feature Pyramid Networks for Object Detection | # 3. Feature Pyramid Networks Our goal is to leverage a ConvNet's pyramidal feature hierarchy, which has semantics from low to high levels, and build a feature pyramid with high-level semantics throughout. The resulting Feature Pyramid Network is general-purpose and in this paper we focus on sliding window proposers (Region Proposal Network, RPN for short) [29] and region-based detectors (Fast R-CNN) [11]. We also generalize FPNs to instance segmentation proposals in Sec. 6. Our method takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps | 1612.03144#9 | 1612.03144#11 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#11 | Feature Pyramid Networks for Object Detection | Figure 3. A building block illustrating the lateral connection and the top-down pathway, merged by addition. at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures (e.g., [19, 36, 16]), and in this paper we present results using ResNets [16]. The construction of our pyramid involves a bottom-up pathway, a top-down pathway, and lateral connections, as introduced in the following. Bottom-up pathway. The bottom-up pathway is the feed-forward computation of the backbone ConvNet, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. There are often many layers producing output maps of the same size and we say these layers are in the same network stage. For our feature pyramid, we define one pyramid level for each stage. We choose the output of the last layer of each stage as our reference set of feature maps, which we will enrich to create our pyramid. This choice is natural since the deepest layer of each stage should have the strongest features. | 1612.03144#10 | 1612.03144#12 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#12 | Feature Pyramid Networks for Object Detection | Specifically, for ResNets [16] we use the feature activations output by each stage's last residual block. We denote the output of these last residual blocks as {C2, C3, C4, C5} for conv2, conv3, conv4, and conv5 outputs, and note that they have strides of {4, 8, 16, 32} pixels with respect to the input image. We do not include conv1 into the pyramid due to its large memory footprint. Top-down pathway and lateral connections. The top-down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, feature maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more accurately localized as it was subsampled fewer times. Fig. 3 shows the building block that constructs our top-down feature maps. With a coarser-resolution feature map, we upsample the spatial resolution by a factor of 2 (using nearest neighbor upsampling for simplicity). The upsampled map is then merged with the corresponding bottom-up map (which undergoes a 1×1 convolutional layer to reduce channel dimensions) by element-wise addition. This process is iterated until the finest resolution map is generated. To start the iteration, we simply attach a 1×1 convolutional layer on C5 to produce the coarsest resolution map. Finally, we append a 3×3 convolution on each merged map to generate the final feature map, which is to reduce the aliasing effect of upsampling. This final set of feature maps is called {P2, P3, P4, P5}, corresponding to {C2, C3, C4, C5} that are respectively of the same spatial sizes. | 1612.03144#11 | 1612.03144#13 | 1612.03144 | [
"1703.06870"
]
|
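The merge procedure described above can be summarized in a short PyTorch-style sketch. This is not the authors' released code: the module names, the ResNet channel counts, and the use of `F.interpolate` for nearest-neighbor upsampling are illustrative assumptions, and the 256-channel output dimension follows the d = 256 setting discussed next.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNTopDown(nn.Module):
    """Builds {P2, P3, P4, P5} from backbone features {C2, C3, C4, C5}."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), d=256):
        super().__init__()
        # 1x1 lateral convolutions reduce each Ci to d channels.
        self.lateral = nn.ModuleList([nn.Conv2d(c, d, kernel_size=1) for c in in_channels])
        # 3x3 convolutions applied to each merged map to reduce upsampling aliasing.
        self.smooth = nn.ModuleList([nn.Conv2d(d, d, kernel_size=3, padding=1) for _ in in_channels])

    def forward(self, c2, c3, c4, c5):
        laterals = [lat(c) for lat, c in zip(self.lateral, (c2, c3, c4, c5))]
        merged = laterals[3]                    # coarsest map: 1x1 conv on C5
        outputs = [self.smooth[3](merged)]      # P5
        for i in (2, 1, 0):                     # iterate top-down to the finest map
            top_down = F.interpolate(merged, scale_factor=2, mode="nearest")
            merged = laterals[i] + top_down     # element-wise addition with the lateral map
            outputs.insert(0, self.smooth[i](merged))
        return outputs                          # [P2, P3, P4, P5]
```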
1612.03144#13 | Feature Pyramid Networks for Object Detection | Because all levels of the pyramid use shared classifiers/regressors as in a traditional featurized image pyramid, we fix the feature dimension (numbers of channels, denoted as d) in all the feature maps. We set d = 256 in this paper and thus all extra convolutional layers have 256-channel outputs. There are no non-linearities in these extra layers, which we have empirically found to have minor impacts. Simplicity is central to our design and we have found that our model is robust to many design choices. We have experimented with more sophisticated blocks (e.g., using multi-layer residual blocks [16] as the connections) and observed marginally better results. Designing better connection modules is not the focus of this paper, so we opt for the simple design described above. | 1612.03144#12 | 1612.03144#14 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#14 | Feature Pyramid Networks for Object Detection | # 4. Applications Our method is a generic solution for building feature pyramids inside deep ConvNets. In the following we adopt our method in RPN [29] for bounding box proposal generation and in Fast R-CNN [11] for object detection. To demonstrate the simplicity and effectiveness of our method, we make minimal modifications to the original systems of [29, 11] when adapting them to our feature pyramid. # 4.1. Feature Pyramid Networks for RPN RPN [29] is a sliding-window class-agnostic object detector. In the original RPN design, a small subnetwork is evaluated on dense 3×3 sliding windows, on top of a single-scale convolutional feature map, performing object/non-object binary classification and bounding box regression. This is realized by a 3×3 convolutional layer followed by two sibling 1×1 convolutions for classification and regression, which we refer to as a network head. The object/non-object criterion and bounding box regression target are defined with respect to a set of reference boxes called anchors [29]. The anchors are of multiple pre-defined scales and aspect ratios in order to cover objects of different shapes. We adapt RPN by replacing the single-scale feature map with our FPN. We attach a head of the same design (3×3 conv and two sibling 1×1 convs) to each level on our feature pyramid. Because the head slides densely over all locations in all pyramid levels, it is not necessary to have multi-scale | 1612.03144#13 | 1612.03144#15 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#15 | Feature Pyramid Networks for Object Detection | anchors on a specific level. Instead, we assign anchors of a single scale to each level. Formally, we define the anchors to have areas of {32², 64², 128², 256², 512²} pixels on {P2, P3, P4, P5, P6} respectively.¹ As in [29] we also use anchors of multiple aspect ratios {1:2, 1:1, 2:1} at each level. So in total there are 15 anchors over the pyramid. We assign training labels to the anchors based on their Intersection-over-Union (IoU) ratios with ground-truth bounding boxes as in [29]. Formally, an anchor is assigned a positive label if it has the highest IoU for a given ground-truth box or an IoU over 0.7 with any ground-truth box, and a negative label if it has IoU lower than 0.3 for all ground-truth boxes. Note that scales of ground-truth boxes are not explicitly used to assign them to the levels of the pyramid; instead, ground-truth boxes are associated with anchors, which have been assigned to pyramid levels. | 1612.03144#14 | 1612.03144#16 | 1612.03144 | [
"1703.06870"
]
|
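A minimal sketch of the adapted RPN head and anchor layout described above, written in PyTorch for illustration (the class and argument names are ours, not from the paper's implementation). One head with 3 anchors per location (a single scale, three aspect ratios) is shared across all pyramid levels.

```python
import torch.nn as nn

# One anchor area per pyramid level; three aspect ratios (1:2, 1:1, 2:1) everywhere.
ANCHOR_AREAS = {"P2": 32**2, "P3": 64**2, "P4": 128**2, "P5": 256**2, "P6": 512**2}
ASPECT_RATIOS = (0.5, 1.0, 2.0)

class RPNHead(nn.Module):
    """3x3 conv followed by sibling 1x1 convs for objectness and box regression."""

    def __init__(self, in_channels=256, num_anchors=len(ASPECT_RATIOS)):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.objectness = nn.Conv2d(in_channels, num_anchors, kernel_size=1)
        self.box_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)

    def forward(self, pyramid):
        # pyramid: list of feature maps [P2, ..., P6]; the same weights are used on every level.
        outputs = []
        for feat in pyramid:
            t = self.relu(self.conv(feat))
            outputs.append((self.objectness(t), self.box_deltas(t)))
        return outputs
```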
1612.03144#16 | Feature Pyramid Networks for Object Detection | As such, we introduce no extra rules in addition to those in [29]. We note that the parameters of the heads are shared across all feature pyramid levels; we have also evaluated the alternative without sharing parameters and observed similar accuracy. The good performance of sharing parameters indicates that all levels of our pyramid share similar semantic levels. This advantage is analogous to that of using a featurized image pyramid, where a common head classifier can be applied to features computed at any image scale. With the above adaptations, RPN can be naturally trained and tested with our FPN, in the same fashion as in [29]. We elaborate on the implementation details in the experiments. # 4.2. | 1612.03144#15 | 1612.03144#17 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#17 | Feature Pyramid Networks for Object Detection | Feature Pyramid Networks for Fast R-CNN Fast R-CNN [11] is a region-based object detector in which Region-of-Interest (RoI) pooling is used to extract features. Fast R-CNN is most commonly performed on a single-scale feature map. To use it with our FPN, we need to assign RoIs of different scales to the pyramid levels. We view our feature pyramid as if it were produced from an image pyramid. Thus we can adapt the assignment strat- egy of region-based detectors [15, 11] in the case when they are run on image pyramids. Formally, we assign an RoI of width w and height h (on the input image to the network) to the level Pk of our feature pyramid by: | 1612.03144#16 | 1612.03144#18 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#18 | Feature Pyramid Networks for Object Detection | k = ⌊k0 + log2(√(wh)/224)⌋.   (1) Here 224 is the canonical ImageNet pre-training size, and k0 is the target level on which an RoI with w × h = 224² should be mapped into. Analogous to the ResNet-based Faster R-CNN system [16] that uses C4 as the single-scale feature map, we set k0 to 4. Intuitively, Eqn. (1) means that if the RoI's scale becomes smaller (say, 1/2 of 224), it should be mapped into a finer-resolution level (say, k = 3). ¹Here we introduce P6 only for covering a larger anchor scale of 512². P6 is simply a stride two subsampling of P5. P6 is not used by the Fast R-CNN detector in the next section. | 1612.03144#17 | 1612.03144#19 | 1612.03144 | [
"1703.06870"
]
|
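Eqn. (1) reduces to a few lines of code. The sketch below assumes the RoI-level range {P2, ..., P5} used by the detector and clamps k to it; the clamping and function name are implementation assumptions, not taken from the paper.

```python
import math

def roi_to_fpn_level(w, h, k0=4, canonical=224.0, k_min=2, k_max=5):
    """Assign an RoI of width w and height h (in input-image pixels) to pyramid level Pk."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))
    return max(k_min, min(k_max, k))

# An RoI at half the canonical scale (112x112) maps one level finer, to P3.
assert roi_to_fpn_level(112, 112) == 3
```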
1612.03144#19 | Feature Pyramid Networks for Object Detection | We attach predictor heads (in Fast R-CNN the heads are class-specific classifiers and bounding box regressors) to all RoIs of all levels. Again, the heads all share parameters, regardless of their levels. In [16], a ResNet's conv5 layers (a 9-layer deep subnetwork) are adopted as the head on top of the conv4 features, but our method has already harnessed conv5 to construct the feature pyramid. So unlike [16], we simply adopt RoI pooling to extract 7×7 features, and attach two hidden 1,024-d fully-connected (fc) layers (each followed by ReLU) before the fi | 1612.03144#18 | 1612.03144#20 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#20 | Feature Pyramid Networks for Object Detection | nal classification and bounding box regression layers. These layers are randomly initialized, as there are no pre-trained fc layers available in ResNets. Note that compared to the standard conv5 head, our 2-fc MLP head is lighter weight and faster. Based on these adaptations, we can train and test Fast R-CNN on top of the feature pyramid. Implementation details are given in the experimental section. # 5. Experiments on Object Detection We perform experiments on the 80 category COCO detection dataset [21]. We train using the union of 80k train images and a 35k subset of val images (trainval35k [2]), and report ablations on a 5k subset of val images (minival). We also report final results on the standard test set (test-std) [21] which has no disclosed labels. As is common practice [12], all network backbones are pre-trained on the ImageNet1k classification set [33] and then fine-tuned on the detection dataset. | 1612.03144#19 | 1612.03144#21 | 1612.03144 | [
"1703.06870"
]
|
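A sketch of the lightweight 2-fc head described in Sec. 4.2 above: RoI features pooled to 7×7 are flattened and passed through two 1024-d fully-connected layers before the classification and box-regression outputs. Written in PyTorch for illustration; the class name and the 81-way classifier (80 COCO categories plus background) are assumptions.

```python
import torch.nn as nn

class TwoFCHead(nn.Module):
    """Fast R-CNN head shared across pyramid levels: two hidden 1024-d fc layers."""

    def __init__(self, in_channels=256, pooled_size=7, hidden=1024, num_classes=81):
        super().__init__()
        in_features = in_channels * pooled_size * pooled_size  # flattened 7x7 RoI features
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.relu = nn.ReLU(inplace=True)
        self.cls_score = nn.Linear(hidden, num_classes)
        self.bbox_pred = nn.Linear(hidden, num_classes * 4)

    def forward(self, roi_feats):
        # roi_feats: (num_rois, in_channels, pooled_size, pooled_size) from RoI pooling on P2-P5.
        x = roi_feats.flatten(start_dim=1)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.cls_score(x), self.bbox_pred(x)
```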
1612.03144#21 | Feature Pyramid Networks for Object Detection | We use the pre-trained ResNet-50 and ResNet-101 models that are publicly available.² Our code is a reimplementation of py-faster-rcnn³ using Caffe2.⁴ # 5.1. Region Proposal with RPN We evaluate the COCO-style Average Recall (AR) and AR on small, medium, and large objects (ARs, ARm, and ARl) following the definitions in [21]. We report results for 100 and 1000 proposals per image (AR100 and AR1k). Implementation details. All architectures in Table 1 are trained end-to-end. The input image is resized such that its shorter side has 800 pixels. We adopt synchronized SGD training on 8 GPUs. A mini-batch involves 2 images per GPU and 256 anchors per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the first 30k mini-batches and 0.002 for the next 10k. For all RPN experiments (including baselines), we include the anchor boxes that are outside the image for training, which is unlike [29] where these anchor boxes are ignored. Other implementation details are as in [29]. Training RPN with FPN on 8 GPUs takes about 8 hours on COCO. ²https://github.com/kaiminghe/deep-residual-networks ³https://github.com/rbgirshick/py-faster-rcnn ⁴https://github.com/caffe2/caffe2 | 1612.03144#20 | 1612.03144#22 | 1612.03144 | [
"1703.06870"
]
|
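The RPN recipe above maps onto a standard momentum-SGD configuration. A hedged PyTorch sketch (only the optimizer and the 30k/10k step schedule; data loading, multi-GPU synchronization, and the loss are omitted):

```python
import torch

def build_rpn_optimizer(model):
    # lr 0.02 for the first 30k mini-batches, then 0.002 for the next 10k;
    # weight decay 0.0001 and momentum 0.9, as quoted above.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=0.0001)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30000], gamma=0.1)
    return optimizer, scheduler  # step the scheduler once per mini-batch
```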
1612.03144#22 | Feature Pyramid Networks for Object Detection | 5 # 5.1.1 Ablation Experiments Comparisons with baselines. For fair comparisons with original RPNs [29], we run two baselines (Table 1(a, b)) us- ing the single-scale map of C4 (the same as [16]) or C5, both using the same hyper-parameters as ours, including using 5 scale anchors of {322, 642, 1282, 2562, 5122}. Table 1 (b) shows no advantage over (a), indicating that a single higher- level feature map is not enough because there is a trade-off between coarser resolutions and stronger semantics. Placing FPN in RPN improves AR1k to 56.3 (Table 1 (c)), which is 8.0 points increase over the single-scale RPN baseline (Table 1 (a)). In addition, the performance on small objects (AR1k s ) is boosted by a large margin of 12.9 points. Our pyramid representation greatly improves RPNâ s robust- ness to object scale variation. How important is top-down enrichment? Table 1(d) shows the results of our feature pyramid without the top- down pathway. | 1612.03144#21 | 1612.03144#23 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#23 | Feature Pyramid Networks for Object Detection | With this modiï¬ cation, the 1à 1 lateral con- nections followed by 3à 3 convolutions are attached to the bottom-up pyramid. This architecture simulates the effect of reusing the pyramidal feature hierarchy (Fig. 1(b)). The results in Table 1(d) are just on par with the RPN baseline and lag far behind ours. We conjecture that this is because there are large semantic gaps between different levels on the bottom-up pyramid (Fig. 1(b)), especially for very deep ResNets. We have also evaluated a variant of Ta- ble 1(d) without sharing the parameters of the heads, but observed similarly degraded performance. This issue can- not be simply remedied by level-speciï¬ c heads. | 1612.03144#22 | 1612.03144#24 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#24 | Feature Pyramid Networks for Object Detection | How important are lateral connections? Table 1(e) shows the ablation results of a top-down feature pyramid without the 1à 1 lateral connections. This top-down pyra- mid has strong semantic features and ï¬ ne resolutions. But we argue that the locations of these features are not precise, because these maps have been downsampled and upsampled several times. More precise locations of features can be di- rectly passed from the ï¬ ner levels of the bottom-up maps via the lateral connections to the top-down maps. As a results, FPN has an AR1k score 10 points higher than Table 1(e). How important are pyramid representations? Instead of resorting to pyramid representations, one can attach the head to the highest-resolution, strongly semantic feature maps of P2 (i.e., the ï¬ nest level in our pyramids). Simi- lar to the single-scale baselines, we assign all anchors to the P2 feature map. This variant (Table 1(f)) is better than the baseline but inferior to our approach. RPN is a sliding win- dow detector with a ï¬ xed window size, so scanning over pyramid levels can increase its robustness to scale variance. In addition, we note that using P2 alone leads to more anchors (750k, Table 1(f)) caused by its large spatial reso- lution. This result suggests that a larger number of anchors is not sufï¬ cient in itself to improve accuracy. | 1612.03144#23 | 1612.03144#25 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#25 | Feature Pyramid Networks for Object Detection |
| RPN | feature | # anchors | lateral? | top-down? | AR100 | AR1k | AR1k (s) | AR1k (m) | AR1k (l) |
|---|---|---|---|---|---|---|---|---|---|
| (a) baseline on conv4 | C4 | 47k | | | 36.1 | 48.3 | 32.0 | 58.7 | 62.2 |
| (b) baseline on conv5 | C5 | 12k | | | 36.3 | 44.9 | 25.3 | 55.5 | 64.2 |
| (c) FPN | {Pk} | 200k | ✓ | ✓ | 44.0 | 56.3 | 44.9 | 63.4 | 66.2 |
| Ablation experiments follow: | | | | | | | | | |
| (d) bottom-up pyramid | {Pk} | 200k | ✓ | | 37.4 | 49.5 | 30.5 | 59.9 | 68.0 |
| (e) top-down pyramid, w/o lateral | {Pk} | 200k | | ✓ | 34.5 | 46.1 | 26.5 | 57.4 | 64.7 |
| (f) only finest level | P2 | 750k | ✓ | ✓ | 38.4 | 51.3 | 35.1 | 59.7 | 67.6 |
Table 1. Bounding box proposal results using RPN [29], evaluated on the COCO minival set. All models are trained on trainval35k. | 1612.03144#24 | 1612.03144#26 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#26 | Feature Pyramid Networks for Object Detection | The columns "lateral" and "top-down" denote the presence of lateral and top-down connections, respectively. The column "feature" denotes the feature maps on which the heads are attached. All results are based on ResNet-50 and share the same hyper-parameters.
| Fast R-CNN | proposals | feature | head | lateral? | top-down? | [email protected] | AP | APs | APm | APl |
|---|---|---|---|---|---|---|---|---|---|---|
| (a) baseline on conv4 | RPN, {Pk} | C4 | conv5 | | | 54.7 | 31.9 | 15.7 | 36.5 | 45.5 |
| (b) baseline on conv5 | RPN, {Pk} | C5 | 2fc | | | 52.9 | 28.8 | 11.9 | 32.4 | 43.4 |
| (c) FPN | RPN, {Pk} | {Pk} | 2fc | ✓ | ✓ | 56.9 | 33.9 | 17.8 | 37.7 | 45.8 |
| Ablation experiments follow: | | | | | | | | | | |
| (d) bottom-up pyramid | RPN, {Pk} | {Pk} | 2fc | ✓ | | 44.9 | 24.9 | 10.9 | 24.4 | 38.5 |
| (e) top-down pyramid, w/o lateral | RPN, {Pk} | {Pk} | 2fc | | ✓ | 54.0 | 31.3 | 13.3 | 35.2 | 45.3 |
| (f) only finest level | RPN, {Pk} | P2 | 2fc | ✓ | ✓ | 56.3 | 33.4 | 17.3 | 37.3 | 45.6 |
| 1612.03144#25 | 1612.03144#27 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#27 | Feature Pyramid Networks for Object Detection | Table 2. Object detection results using Fast R-CNN [11] on a fixed set of proposals (RPN, {Pk}, Table 1(c)), evaluated on the COCO minival set. Models are trained on the trainval35k set. All results are based on ResNet-50 and share the same hyper-parameters.
| Faster R-CNN | proposals | feature | head | lateral? | top-down? | [email protected] | AP | APs | APm | APl |
|---|---|---|---|---|---|---|---|---|---|---|
| (*) baseline from He et al. [16] | RPN, C4 | C4 | conv5 | | | 47.3 | 26.3 | - | - | - |
| (a) baseline on conv4 | RPN, C4 | C4 | conv5 | | | 53.1 | 31.6 | 13.2 | 35.6 | 47.1 |
| (b) baseline on conv5 | RPN, C5 | C5 | 2fc | | | 51.7 | 28.0 | 9.6 | 31.9 | 43.1 |
| (c) FPN | RPN, {Pk} | {Pk} | 2fc | ✓ | ✓ | 56.9 | 33.9 | 17.8 | 37.7 | 45.8 |
| 1612.03144#26 | 1612.03144#28 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#28 | Feature Pyramid Networks for Object Detection | Table 3. Object detection results using Faster R-CNN [29] evaluated on the COCO minival set. The backbone networks for RPN are consistent with Fast R-CNN. Models are trained on the trainval35k set and use ResNet-50. †Provided by authors of [16]. # 5.2. Object Detection with Fast/Faster R-CNN Next we investigate FPN for region-based (non-sliding window) detectors. We evaluate object detection by the COCO-style Average Precision (AP) and PASCAL-style AP (at a single IoU threshold of 0.5). We also report COCO AP on objects of small, medium, and large sizes (namely, APs, APm, and APl) following the defi | 1612.03144#27 | 1612.03144#29 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#29 | Feature Pyramid Networks for Object Detection | nitions in [21]. Implementation details. The input image is resized such that its shorter side has 800 pixels. Synchronized SGD is used to train the model on 8 GPUs. Each mini-batch in- volves 2 image per GPU and 512 RoIs per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the ï¬ rst 60k mini-batches and 0.002 for the next 20k. We use 2000 RoIs per image for training and 1000 for testing. Training Fast R-CNN with FPN takes about 10 hours on the COCO dataset. # 5.2.1 Fast R-CNN (on ï¬ | 1612.03144#28 | 1612.03144#30 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#30 | Feature Pyramid Networks for Object Detection | xed proposals) puted by RPN on FPN (Table 1(c)), because it has good per- formance on small objects that are to be recognized by the detector. For simplicity we do not share features between Fast R-CNN and RPN, except when speciï¬ ed. As a ResNet-based Fast R-CNN baseline, following [16], we adopt RoI pooling with an output size of 14à 14 and attach all conv5 layers as the hidden layers of the head. This gives an AP of 31.9 in Table 2(a). Table 2(b) is a base- line exploiting an MLP head with 2 hidden fc layers, similar to the head in our architecture. It gets an AP of 28.8, indi- cating that the 2-fc head does not give us any orthogonal advantage over the baseline in Table 2(a). Table 2(c) shows the results of our FPN in Fast R-CNN. Comparing with the baseline in Table 2(a), our method im- proves AP by 2.0 points and small object AP by 2.1 points. Comparing with the baseline that also adopts a 2fc head (Ta- ble 2(b)), our method improves AP by 5.1 points.5 These comparisons indicate that our feature pyramid is superior to single-scale features for a region-based object detector. To better investigate FPNâ s effects on the region-based de- tector alone, we conduct ablations of Fast R-CNN on a ï¬ | 1612.03144#29 | 1612.03144#31 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#31 | Feature Pyramid Networks for Object Detection | xed set of proposals. We choose to freeze the proposals as com- Table 2(d) and (e) show that removing top-down con- ⁵We expect a stronger architecture of the head [30] will improve upon our results, which is beyond the focus of this paper.
| method | backbone | competition | image pyramid | test-dev [email protected] | test-dev AP | test-dev APs | test-dev APm | test-dev APl | test-std [email protected] | test-std AP | test-std APs | test-std APm | test-std APl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ours, Faster R-CNN on FPN | ResNet-101 | - | | 59.1 | 36.2 | 18.2 | 39.0 | 48.2 | 58.5 | 35.8 | 17.5 | 38.7 | 47.8 |
| Competition-winning single-model results follow: | | | | | | | | | | | | | |
| G-RMI† | Inception-ResNet | 2016 | | - | 34.7 | - | - | - | - | - | - | - | - |
| AttractioNet‡ [10] | VGG16 + Wide ResNet§ | 2016 | ✓ | 53.4 | 35.7 | 15.6 | 38.0 | 52.7 | 52.9 | 35.3 | 14.7 | 37.6 | 51.9 |
| Faster R-CNN +++ [16] | ResNet-101 | 2015 | ✓ | 55.7 | 34.9 | 15.6 | 38.7 | 50.9 | - | - | - | - | - |
| Multipath [40] (on minival) | VGG-16 | 2015 | | 49.6 | 31.5 | - | - | - | - | - | - | - | - |
| ION‡ [2] | VGG-16 | 2015 | | 53.4 | 31.2 | 12.8 | 32.9 | 45.2 | 52.9 | 30.7 | 11.8 | 32.8 | 44.8 |
| 1612.03144#30 | 1612.03144#32 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#32 | Feature Pyramid Networks for Object Detection | Table 4. Comparisons of single-model results on the COCO detection benchmark. Some results were not available on the test-std set, so we also include the test-dev results (and for Multipath [40] on minival). †: http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf. ‡: http://mscoco.org/dataset/#detections-leaderboard. §: This entry of AttractioNet [10] adopts VGG-16 for proposals and Wide ResNet [39] for object detection, so is not strictly a single-model result. nections or removing lateral connections leads to inferior results, similar to what we have observed in the above subsection for RPN. It is noteworthy that removing top-down connections (Table 2(d)) significantly degrades the accuracy, suggesting that Fast R-CNN suffers from using the low-level features at the high-resolution maps. In Table 2(f), we adopt Fast R-CNN on the single finest scale feature map of P2. Its result (33.4 AP) is marginally worse than that of using all pyramid levels (33.9 AP, Table 2(c)). We argue that this is because RoI pooling is a warping-like operation, which is less sensitive to the region's scales. Despite the good accuracy of this variant, it is based on the RPN proposals of {Pk} and has thus already benefited from the pyramid representation. # 5.2.2 Faster R-CNN (on consistent proposals) | 1612.03144#31 | 1612.03144#33 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#33 | Feature Pyramid Networks for Object Detection | In the above we used a ï¬ xed set of proposals to investi- gate the detectors. But in a Faster R-CNN system [29], the RPN and Fast R-CNN must use the same network back- bone in order to make feature sharing possible. Table 3 shows the comparisons between our method and two base- lines, all using consistent backbone architectures for RPN and Fast R-CNN. Table 3(a) shows our reproduction of the baseline Faster R-CNN system as described in [16]. Under controlled settings, our FPN (Table 3(c)) is better than this strong baseline by 2.3 points AP and 3.8 points [email protected]. Note that Table 3(a) and (b) are baselines that are much stronger than the baseline provided by He et al. [16] in Ta- ble 3(*). | 1612.03144#32 | 1612.03144#34 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#34 | Feature Pyramid Networks for Object Detection | We ï¬ nd the following implementations contribute to the gap: (i) We use an image scale of 800 pixels instead of 600 in [11, 16]; (ii) We train with 512 RoIs per image which accelerate convergence, in contrast to 64 RoIs in [11, 16]; (iii) We use 5 scale anchors instead of 4 in [16] (adding 322); (iv) At test time we use 1000 proposals per image in- stead of 300 in [16]. So comparing with He et al.â s ResNet- 50 Faster R-CNN baseline in Table 3(*), our method im- proves AP by 7.6 points and [email protected] by 9.6 points. ResNet-50 ResNet-101 share features? no yes [email protected] 56.9 57.2 AP 33.9 34.3 [email protected] 58.0 58.2 AP 35.0 35.2 Table 5. More object detection results using Faster R-CNN and our FPNs, evaluated on minival. Sharing features increases train time by 1.5à (using 4-step training [29]), but reduces test time. ble 5, we evaluate sharing features following the 4-step training described in [29]. Similar to [29], we ï¬ nd that shar- ing features improves accuracy by a small margin. Feature sharing also reduces the testing time. | 1612.03144#33 | 1612.03144#35 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#35 | Feature Pyramid Networks for Object Detection | Running time. With feature sharing, our FPN-based Faster R-CNN system has inference time of 0.148 seconds per image on a single NVIDIA M40 GPU for ResNet-50, and 0.172 seconds for ResNet-101.6 As a comparison, the single-scale ResNet-50 baseline in Table 3(a) runs at 0.32 seconds. Our method introduces small extra cost by the ex- tra layers in the FPN, but has a lighter weight head. Overall our system is faster than the ResNet-based Faster R-CNN counterpart. | 1612.03144#34 | 1612.03144#36 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#36 | Feature Pyramid Networks for Object Detection | We believe the efï¬ ciency and simplicity of our method will beneï¬ t future research and applications. # 5.2.3 Comparing with COCO Competition Winners We ï¬ nd that our ResNet-101 model in Table 5 is not sufï¬ - ciently trained with the default learning rate schedule. So we increase the number of mini-batches by 2à at each learning rate when training the Fast R-CNN step. This in- creases AP on minival to 35.6, without sharing features. This model is the one we submitted to the COCO detection leaderboard, shown in Table 4. We have not evaluated its feature-sharing version due to limited time, which should be slightly better as implied by Table 5. Table 4 compares our method with the single-model re- sults of the COCO competition winners, including the 2016 winner G-RMI and the 2015 winner Faster R-CNN+++. Without adding bells and whistles, our single-model entry has surpassed these strong, heavily engineered competitors. | 1612.03144#35 | 1612.03144#37 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#37 | Feature Pyramid Networks for Object Detection | Sharing features. In the above, for simplicity we do not share the features between RPN and Fast R-CNN. In Ta- 6These runtimes are updated from an earlier version of this paper. 7 Taxt4 160x160 [128x128] 14x14 gox80 [64x64] Figure 4. FPN for object segment proposals. The feature pyramid is constructed with identical structure as for object detection. We apply a small MLP on 5Ã 5 windows to generate dense object seg- ments with output dimension of 14Ã 14. Shown in orange are the size of the image regions the mask corresponds to for each pyra- mid level (levels P3â 5 are shown here). Both the corresponding image region size (light orange) and canonical object size (dark orange) are shown. Half octaves are handled by an MLP on 7x7 windows (7 â 5 2), not shown here. Details are in the appendix. On the test-dev set, our method increases over the ex- isting best results by 0.5 points of AP (36.2 vs. 35.7) and 3.4 points of [email protected] (59.1 vs. 55.7). It is worth noting that our method does not rely on image pyramids and only uses a single input image scale, but still has outstanding AP on small-scale objects. This could only be achieved by high- resolution image inputs with previous methods. Moreover, our method does not exploit many popular improvements, such as iterative regression [9], hard nega- tive mining [35], context modeling [16], stronger data aug- mentation [22], etc. These improvements are complemen- tary to FPNs and should boost accuracy further. Recently, FPN has enabled new top results in all tracks of the COCO competition, including detection, instance segmentation, and keypoint estimation. See [14] for details. # 6. Extensions: Segmentation Proposals Our method is a generic pyramid representation and can be used in applications other than object detection. In this section we use FPNs to generate segmentation proposals, following the DeepMask/SharpMask framework [27, 28]. DeepMask/SharpMask were trained on image crops for predicting instance segments and object/non-object scores. At inference time, these models are run convolutionally to generate dense proposals in an image. | 1612.03144#36 | 1612.03144#38 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#38 | Feature Pyramid Networks for Object Detection | To generate segments at multiple scales, image pyramids are necessary [27, 28]. It is easy to adapt FPN to generate mask proposals. We use a fully convolutional setup for both training and infer- ence. We construct our feature pyramid as in Sec. 5.1 and set d = 128. On top of each level of the feature pyramid, we apply a small 5Ã 5 MLP to predict 14Ã 14 masks and object scores in a fully convolutional fashion, see Fig. 4. Addition- ally, motivated by the use of 2 scales per octave in the image pyramid of [27, 28], we use a second MLP of input size 7Ã 7 to handle half octaves. The two MLPs play a similar role as anchors in RPN. | 1612.03144#37 | 1612.03144#39 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#39 | Feature Pyramid Networks for Object Detection | The architecture is trained end-to-end; full implementation details are given in the appendix. 8 image pyramid] AR AR, AR AR; |time (s) DeepMask [27] v 37.1 15.8 50.1 54.9} 0.49 SharpMask [28] v 39.8 17.4 53.1 59.1} 0.77 InstanceFCN [4] v 39.2 = - - 1.50¢ FPN Mask Results: single MLP [5x5] 43.4 32.5 49.2 53.7) 0.15 single MLP [7x7] 43.5 30.0 49.6 57.8) 0.19 dual MLP [5x5, 7x7] 45.7 31.9 51.5 60.8] 0.24 + 2x mask resolution 46.7 31.7 53.1 63.2] 0.25 + 2x train schedule 48.1 32.6 54.2 65.6) 0.25 Table 6. Instance segmentation proposals evaluated on the ï¬ rst 5k COCO val images. All models are trained on the train set. DeepMask, SharpMask, and FPN use ResNet-50 while Instance- FCN uses VGG-16. DeepMask and SharpMask performance is computed with models available from https://github. com/facebookresearch/deepmask (both are the â | 1612.03144#38 | 1612.03144#40 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#40 | Feature Pyramid Networks for Object Detection | zoomâ variants). â Runtimes are measured on an NVIDIA M40 GPU, ex- cept the InstanceFCN timing which is based on the slower K40. # 6.1. Segmentation Proposal Results Results are shown in Table 6. We report segment AR and segment AR on small, medium, and large objects, always for 1000 proposals. Our baseline FPN model with a single 5Ã 5 MLP achieves an AR of 43.4. Switching to a slightly larger 7Ã 7 MLP leaves accuracy largely unchanged. Using both MLPs together increases accuracy to 45.7 AR. Increas- ing mask output size from 14Ã 14 to 28Ã 28 increases AR another point (larger sizes begin to degrade accuracy). Fi- nally, doubling the training iterations increases AR to 48.1. We also report comparisons to DeepMask [27], Sharp- Mask [28], and InstanceFCN [4], the previous state of the art methods in mask proposal generation. We outperform the accuracy of these approaches by over 8.3 points AR. In particular, we nearly double the accuracy on small objects. Existing mask proposal methods [27, 28, 4] are based on densely sampled image pyramids (e.g., scaled by 2{â 2:0.5:1} in [27, 28]), making them computationally expensive. Our approach, based on FPNs, is substantially faster (our mod- els run at 6 to 7 FPS). These results demonstrate that our model is a generic feature extractor and can replace image pyramids for other multi-scale detection problems. | 1612.03144#39 | 1612.03144#41 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#41 | Feature Pyramid Networks for Object Detection | # 7. Conclusion We have presented a clean and simple framework for building feature pyramids inside ConvNets. Our method shows signiï¬ cant improvements over several strong base- lines and competition winners. Thus, it provides a practical solution for research and applications of feature pyramids, without the need of computing image pyramids. Finally, our study suggests that despite the strong representational power of deep ConvNets and their implicit robustness to scale variation, it is still critical to explicitly address multi- scale problems using pyramid representations. | 1612.03144#40 | 1612.03144#42 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#42 | Feature Pyramid Networks for Object Detection | # A. Implementation of Segmentation Proposals We use our feature pyramid networks to efï¬ ciently gen- erate object segment proposals, adopting an image-centric training strategy popular for object detection [11, 29]. Our FPN mask generation model inherits many of the ideas and motivations from DeepMask/SharpMask [27, 28]. How- ever, in contrast to these models, which were trained on image crops and used a densely sampled image pyramid for inference, we perform fully-convolutional training for mask prediction on a feature pyramid. | 1612.03144#41 | 1612.03144#43 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#43 | Feature Pyramid Networks for Object Detection | While this requires chang- ing many of the speciï¬ cs, our implementation remains sim- ilar in spirit to DeepMask. Speciï¬ cally, to deï¬ ne the label of a mask instance at each sliding window, we think of this window as being a crop on the input image, allowing us to inherit deï¬ nitions of positives/negatives from DeepMask. We give more details next, see also Fig. 4 for a visualization. We construct the feature pyramid with P2â 6 using the same architecture as described in Sec. 5.1. We set d = 128. Each level of our feature pyramid is used for predicting masks at a different scale. As in DeepMask, we deï¬ ne the scale of a mask as the max of its width and height. Masks with scales of {32, 64, 128, 256, 512} pixels map to {P2, P3, P4, P5, P6}, respectively, and are handled by a 5à 5 MLP. As DeepMask uses a pyramid with half octaves, 2) we use a second slightly larger MLP of size 7à 7 (7 â 5 to handle half-octaves in our model (e.g., a 128 2 scale mask is predicted by the 7à 7 MLP on P4). Objects at inter- mediate scales are mapped to the nearest scale in log space. As the MLP must predict objects at a range of scales for each pyramid level (speciï¬ cally a half octave range), some padding must be given around the canonical object size. We use 25% padding. This means that the mask output over {P2, P3, P4, P5, P6} maps to {40, 80, 160, 320, 640} sized image regions for the 5à 5 MLP (and to 2 larger corre- sponding sizes for the 7à 7 MLP). | 1612.03144#42 | 1612.03144#44 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#44 | Feature Pyramid Networks for Object Detection | We construct the feature pyramid with P2–P6 using the same architecture as described in Sec. 5.1. We set d = 128. Each level of our feature pyramid is used for predicting masks at a different scale. As in DeepMask, we define the scale of a mask as the max of its width and height. Masks with scales of {32, 64, 128, 256, 512} pixels map to {P2, P3, P4, P5, P6}, respectively, and are handled by a 5×5 MLP. As DeepMask uses a pyramid with half octaves, we use a second slightly larger MLP of size 7×7 (7 ≈ 5√2) to handle half-octaves in our model (e.g., a 128√2 scale mask is predicted by the 7×7 MLP on P4). Objects at intermediate scales are mapped to the nearest scale in log space. As the MLP must predict objects at a range of scales for each pyramid level (specifically a half octave range), some padding must be given around the canonical object size. We use 25% padding. This means that the mask output over {P2, P3, P4, P5, P6} maps to {40, 80, 160, 320, 640} sized image regions for the 5×5 MLP (and to √2 larger corresponding sizes for the 7×7 MLP). | 1612.03144#43 | 1612.03144#45 | 1612.03144 | [
"1703.06870"
]
|
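The scale-to-level rule described above (mask scales {32, 64, 128, 256, 512} on {P2, ..., P6}, intermediate scales snapped to the nearest canonical scale in log space) can be written as a small helper. This sketch ignores the half-octave 7×7 MLP, and the function name and tie-breaking are illustrative assumptions.

```python
import math

CANONICAL_SCALES = {2: 32, 3: 64, 4: 128, 5: 256, 6: 512}  # pyramid level -> canonical mask scale

def mask_scale_to_level(width, height):
    """The scale of a mask is max(width, height); pick the nearest canonical scale in log space."""
    scale = max(width, height)
    return min(CANONICAL_SCALES,
               key=lambda level: abs(math.log2(scale) - math.log2(CANONICAL_SCALES[level])))

# A 96-pixel mask is closer to 128 than to 64 in log2 space, so it is handled on P4.
assert mask_scale_to_level(96, 60) == 4
```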
1612.03144#45 | Feature Pyramid Networks for Object Detection | Each spatial position in the feature map is used to predict a mask at a different location. Specifically, at scale Pk, each spatial position in the feature map is used to predict the mask whose center falls within 2^k pixels of that location (corresponding to ±1 cell offset in the feature map). If no object center falls within this range, the location is considered a negative, and, as in DeepMask, is used only for training the score branch and not the mask branch. The MLP we use for predicting the mask and score is fairly simple. We apply a 5×5 kernel with 512 outputs, followed by sibling fully connected layers to predict a 14×14 mask (14² outputs) and object score (1 output). The model is implemented in a fully convolutional manner (using 1×1 convolutions in place of fully connected layers). The 7×7 MLP for handling objects at half octave scales is identical to the 5×5 MLP except for its larger input region. During training, we randomly sample 2048 examples per mini-batch (128 examples per image from 16 images) with | 1612.03144#44 | 1612.03144#46 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#46 | Feature Pyramid Networks for Object Detection | a positive/negative sampling ratio of 1:3. The mask loss is given 10× higher weight than the score loss. This model is trained end-to-end on 8 GPUs using synchronized SGD (2 images per GPU). We start with a learning rate of 0.03 and train for 80k mini-batches, dividing the learning rate by 10 after 60k mini-batches. The image scale is set to 800 pixels during training and testing (we do not use scale jitter). During inference our fully-convolutional model predicts scores at all positions and scales and masks at the 1000 highest scoring locations. We do not perform any non-maximum suppression or post-processing. | 1612.03144#45 | 1612.03144#47 | 1612.03144 | [
"1703.06870"
]
|
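A sketch of the fully-convolutional mask "MLP" described in this appendix: a 5×5 convolution with 512 outputs followed by sibling 1×1 convolutions that stand in for the fully-connected layers, producing 14×14 mask logits (14² = 196 channels) and an object score at every position. The module names, the ReLU, and the padding choice are assumptions made so the sketch runs as-is.

```python
import torch.nn as nn

class MaskMLPHead(nn.Module):
    """5x5 mask head run fully convolutionally over one pyramid level (d = 128 channels)."""

    def __init__(self, in_channels=128, hidden=512, mask_size=14):
        super().__init__()
        self.hidden = nn.Conv2d(in_channels, hidden, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 convolutions play the role of fully-connected layers at every spatial position.
        self.mask_logits = nn.Conv2d(hidden, mask_size * mask_size, kernel_size=1)
        self.score = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, feat):
        t = self.relu(self.hidden(feat))
        return self.mask_logits(t), self.score(t)  # (N, 196, H, W) mask logits and (N, 1, H, W) scores
```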
1612.03144#47 | Feature Pyramid Networks for Object Detection | ed multi-scale deep convolutional neural network for fast object detection. In ECCV, 2016. [4] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016. [5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. [6] P. Doll´ar, R. Appel, S. Belongie, and P. Perona. | 1612.03144#46 | 1612.03144#48 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#48 | Feature Pyramid Networks for Object Detection | Fast feature pyramids for object detection. TPAMI, 2014. [7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ra- manan. Object detection with discriminatively trained part- based models. TPAMI, 2010. [8] G. Ghiasi and C. C. Fowlkes. Laplacian pyramid reconstruc- In ECCV, tion and reï¬ nement for semantic segmentation. 2016. [9] S. Gidaris and N. Komodakis. | 1612.03144#47 | 1612.03144#49 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#49 | Feature Pyramid Networks for Object Detection | Object detection via a multi- region & semantic segmentation-aware CNN model. In ICCV, 2015. [10] S. Gidaris and N. Komodakis. Attend reï¬ ne repeat: Active box proposal generation via in-out localization. In BMVC, 2016. [11] R. Girshick. Fast R-CNN. In ICCV, 2015. [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. | 1612.03144#48 | 1612.03144#50 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#50 | Feature Pyramid Networks for Object Detection | Rich fea- ture hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. [13] B. Hariharan, P. Arbel´aez, R. Girshick, and J. Malik. Hyper- columns for object segmentation and ï¬ ne-grained localiza- tion. In CVPR, 2015. [14] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick. Mask r-cnn. arXiv:1703.06870, 2017. [15] K. He, X. Zhang, S. Ren, and J. Sun. | 1612.03144#49 | 1612.03144#51 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#51 | Feature Pyramid Networks for Object Detection | Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV. 2014. [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. [17] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-ï¬ ne feature aggregation. In CVPR, 2016. [18] T. Kong, A. Yao, Y. Chen, and F. | 1612.03144#50 | 1612.03144#52 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#52 | Feature Pyramid Networks for Object Detection | Sun. Hypernet: Towards ac- curate region proposal generation and joint object detection. In CVPR, 2016. [19] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet clas- siï¬ cation with deep convolutional neural networks. In NIPS, 2012. [20] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. | 1612.03144#51 | 1612.03144#53 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#53 | Feature Pyramid Networks for Object Detection | Backpropagation applied to handwritten zip code recognition. Neural compu- tation, 1989. [21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: Com- mon objects in context. In ECCV, 2014. [22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. SSD: Single shot multibox detector. In ECCV, 2016. [23] W. Liu, A. Rabinovich, and A. C. Berg. ParseNet: Looking | 1612.03144#52 | 1612.03144#54 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#54 | Feature Pyramid Networks for Object Detection | wider to see better. In ICLR workshop, 2016. [24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004. [26] A. Newell, K. Yang, and J. Deng. Stacked hourglass net- works for human pose estimation. In ECCV, 2016. [27] P. O. Pinheiro, R. Collobert, and P. | 1612.03144#53 | 1612.03144#55 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#55 | Feature Pyramid Networks for Object Detection | Dollar. Learning to seg- ment object candidates. In NIPS, 2015. [28] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Doll´ar. Learn- ing to reï¬ ne object segments. In ECCV, 2016. [29] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: To- wards real-time object detection with region proposal net- works. In NIPS, 2015. | 1612.03144#54 | 1612.03144#56 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#56 | Feature Pyramid Networks for Object Detection | [30] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. PAMI, 2016. [31] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolu- tional networks for biomedical image segmentation. In MIC- CAI, 2015. [32] H. Rowley, S. Baluja, and T. | 1612.03144#55 | 1612.03144#57 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#57 | Feature Pyramid Networks for Object Detection | Kanade. Human face detec- tion in visual scenes. Technical Report CMU-CS-95-158R, Carnegie Mellon University, 1995. [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. | 1612.03144#56 | 1612.03144#58 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#58 | Feature Pyramid Networks for Object Detection | ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. [34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014. [35] A. Shrivastava, A. Gupta, and R. Girshick. Training region- based object detectors with online hard example mining. In CVPR, 2016. | 1612.03144#57 | 1612.03144#59 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#59 | Feature Pyramid Networks for Object Detection | [36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [37] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. IJCV, Smeulders. Selective search for object recognition. 2013. 10 [38] R. Vaillant, C. Monrocq, and Y. LeCun. Original approach for the localisation of objects in images. IEE Proc. on Vision, Image, and Signal Processing, 1994. | 1612.03144#58 | 1612.03144#60 | 1612.03144 | [
"1703.06870"
]
|
1612.03144#60 | Feature Pyramid Networks for Object Detection | [39] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016. [40] S. Zagoruyko, A. Lerer, T.-Y. Lin, P. O. Pinheiro, S. Gross, S. Chintala, and P. Doll´ar. A multipath network for object detection. In BMVC, 2016. | 1612.03144#59 | 1612.03144 | [
"1703.06870"
]
|
|
1612.02136#0 | Mode Regularized Generative Adversarial Networks | 7 1 0 2 r a M 2 ] G L . s c [ 5 v 6 3 1 2 0 . 2 1 6 1 : v i X r a Published as a conference paper at ICLR 2017 # MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS â Tong Cheâ , â ¡Yanran Liâ , â ,§Athul Paul Jacob, â Yoshua Bengio, â ¡Wenjie Li â Montreal Institute for Learning Algorithms, Universit´e de Montr´eal, Montr´eal, QC H3T 1J4, Canada â ¡Department of Computing, The Hong Kong Polytechnic University, Hong Kong §David R. Cheriton School of Computer Science, University Of Waterloo, Waterloo, ON N2L 3G1, Canada {tong.che,ap.jacob,yoshua.bengio}@umontreal.ca {csyli,cswjli}@comp.polyu.edu.hk # ABSTRACT | 1612.02136#1 | 1612.02136 | [
"1511.05440"
]
|
|
1612.02136#1 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data gener- ating distribution, during the early phases of training and thus providing a uniï¬ ed solution to the missing modes problem. | 1612.02136#0 | 1612.02136#2 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#2 | Mode Regularized Generative Adversarial Networks | 1 # INTRODUCTION Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wu et al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks. Despite their success, GANs are generally considered as very hard to train due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the miss- ing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution. This issue has been the subject of several recent papers proposing several tricks and new archi- tectures to stabilize GANâ s training and encourage its samplesâ | 1612.02136#1 | 1612.02136#3 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#3 | Mode Regularized Generative Adversarial Networks | diversity. However, we argue that a general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds. In fact, the shape of the discriminator function in the data | 1612.02136#2 | 1612.02136#4 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#4 | Mode Regularized Generative Adversarial Networks | â Authors contributed equally. 1 Published as a conference paper at ICLR 2017 space can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt the training of GANs (Figure 1). To remedy this problem, we propose a novel regu- larizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity met- rics will provide us with more stable gradients to train our generator. Combining this idea with an ap- proach meant to penalize the missing modes, we pro- pose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones. Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and ï¬ x the missing mode problem all at once, with positive or at the least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline. # 2 RELATED WORK The GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator and the discriminator are deï¬ ned by deep neural networks. In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarges GANâ s representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneï¬ | 1612.02136#3 | 1612.02136#5 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#5 | Mode Regularized Generative Adversarial Networks | cial information. Motivated from this, several conditional variants of GAN has been applied to a wide range of tasks, including image prediction from a normal map Wang & Gupta (2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-time image manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito & Matsumoto (2016); Vondrick et al. (2016), texture synthesis, style transfer, and video stylization Li & Wand (2016). Researchers also aim at stretching GANâ s limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convo- lutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully designs a class of deep convolutional generative adversarial networks which has led to signiï¬ cant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs are through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses for training objectives for generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the sample visual ï¬ | 1612.02136#4 | 1612.02136#6 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#6 | Mode Regularized Generative Adversarial Networks | delity. Recent literature have also shown impressive results on image super-resolution to infer photo-realistic natural images for 4x upscaling factors Ledig et al. (2016); Sønderby et al. (2016); Nguyen et al. (2016). Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GANâ s training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) pro- pose feature matching technique to stabilize GANâ s training. The generator is required to match the statistics of intermediate features of the discriminator. Similar idea is adopted by Zhao et al. (2016). | 1612.02136#5 | 1612.02136#7 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#7 | Mode Regularized Generative Adversarial Networks | 2 Published as a conference paper at ICLR 2017 In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GANâ s training stability. Furthermore, some researchers make use of infor- mation in both spaces in a uniï¬ ed learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. Parallelly, Metz et al. (2016) stabilize GANs by unrolling the optimization of discriminator, which can be considered as an orthogonal work with ours. Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples whereas our autoencoder based losses serves as a regularizer to penalize missing modes and thus improving GANâ | 1612.02136#6 | 1612.02136#8 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#8 | Mode Regularized Generative Adversarial Networks | s training stability and sample qualities. We demonstrate detailed differences from various aspects in Appendix D. # 3 MODE REGULARIZERS FOR GANS The GAN training procedure can be viewed as a non-cooperative two player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. Then the generator G has to take advantage of the local gradient â log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold. We now take a closer look at the root cause of the instabilities while training GANs. The discrim- inator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014); Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator pro- duces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example,Denton et al. (2015) noted a common failure pattern for training GANs which is the vanishing gradient problem, in which the discriminator D perfectly classiï¬ es real and fake examples, such that around the fake examples, D is nearly zero. In such cases, the generator will receive no gradient to improve itself.1 Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. | 1612.02136#7 | 1612.02136#9 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#9 | Mode Regularized Generative Adversarial Networks | However, in practice since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminatorâ s output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes. 3.1 GEOMETRIC METRICS REGULARIZER Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator. While in supervised models, the optimization targets are distance functions with nice geometric properties. The latter usually provides much easier training gradients than the former, especially at the early stages of training. 1This problem exists even when we use log D(G(z)) as target for the generator, as noted by Denton et al. (2015) and our experiments. | 1612.02136#8 | 1612.02136#10 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#10 | Mode Regularized Generative Adversarial Networks | 3 Published as a conference paper at ICLR 2017 Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z) : Z â X generates samples by sam- pling ï¬ rst from a ï¬ xed prior distribution in space Z followed by a deterministic trainable transforma- tion G into the sample space X. Together with G, we also jointly train an encoder E(x) : X â Z. Assume d is some similarity metric in the data space, we add Exâ ¼pd [d(x, Gâ ¦E(x))] as a regularizer, where pd is the data generating distribution. | 1612.02136#9 | 1612.02136#11 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#11 | Mode Regularized Generative Adversarial Networks | The encoder itself is trained by minimizing the same reconstruction error. In practice, there are many options for the distance measure d. For instance, the pixel-wise L2 distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classiï¬ er. (Ledig et al., 2016) The geometric intuition for this regularizer is straight-forward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, Ls metric. | 1612.02136#10 | 1612.02136#12 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#12 | Mode Regularized Generative Adversarial Networks | The idea of adding an encoder is equivalent to ï¬ rst training a point to point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds. # 3.2 MODE REGULARIZER In addition to the metric regularizer, we propose a mode regularizer to further penalize miss- ing modes. In traditional GANs, the optimization target for the generator is the empirical sum >=; Vo log D(Go(z;)). The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non- missing modes tend to correspond to a high value of D, because the generator is not perfect so that the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes. As an example, consider the situation in Fig- ure 2. For most z, the gradient of the generator â θ log D(Gθ(z)) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gra- dients to push itself towards the minor mode M2. However, it is possible that such z is of low or zero probability in the prior distribution p0. towards Mr towards Mp generation manifold Given this observation, consider a regularized GAN model with the metric regularizer. As- sume M0 is a minor mode of the data generat- ing distribution. | 1612.02136#11 | 1612.02136#13 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#13 | Mode Regularized Generative Adversarial Networks | For x â M0, we know that if G â ¦ E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufï¬ cient training examples of mode M0 in the training data, we add the mode regularizer Exâ ¼pd [log D(G â ¦ E(x))] to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve fair probability mass distribution across different modes. Figure 2: Illustration of missing modes problem. In short, our regularized optimization target for the generator and the encoder becomes: | 1612.02136#12 | 1612.02136#14 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#14 | Mode Regularized Generative Adversarial Networks | TG = â Ez[log D(G(z))] + Exâ ¼pd [λ1d(x, G â ¦ E(x)) + λ2 log D(G â ¦ E(x))] TE = Exâ ¼pd [λ1d(x, G â ¦ E(x)) + λ2 log D(G â ¦ E(x))] (1) (2) 4 Published as a conference paper at ICLR 2017 3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS On some large scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without care- fully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples. The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution. An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train a discriminator D1 which separates between the samples x and G â ¦ E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D1(Gâ ¦E(x))+λd(x, Gâ ¦E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between distributions G(z) and G â ¦ E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples. 3.4 EVALUATION METRICS FOR MODE MISSING In order to estimate both the missing modes and the sample qualities in our experiments, we used several different metrics for different experiments instead of human annotators. The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset: | 1612.02136#13 | 1612.02136#15 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#15 | Mode Regularized Generative Adversarial Networks | exp (ExKL(p(y|x)||pâ (y))) (3) Where x denotes one sample, p(y|x) is the softmax output of a trained classiï¬ er of the labels, and pâ (y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classiï¬ er usually has a high conï¬ dence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapse to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and pâ (y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score: exp (ExKL(p(y|x)||p(y)) â KL(pâ (y)||p(y))) (4) where p(y) is the distribution of labels in the training data. According to our human evaluation experiences, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric. However, in datasets without labels (LSUN) or where the labels are not sufï¬ cient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (See (Goodfellow et al., 2014) for proof): | 1612.02136#14 | 1612.02136#16 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#16 | Mode Regularized Generative Adversarial Networks | Dâ (s) â pg(s) pg(s) + pd(s) (5) Where pg is the probability density of the generator and pd is the density of the data distribution. To prevent Dâ from learning a perfect 0-1 separation of pg and pd, we inject a zero-mean Gaussian noise to the inputs when training Dâ . After training, we test Dâ on the test set T of the real dataset. If for any test sample t â T , the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes. | 1612.02136#15 | 1612.02136#17 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#17 | Mode Regularized Generative Adversarial Networks | 5 Published as a conference paper at ICLR 2017 4 EXPERIMENTS # 4.1 MNIST We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we deï¬ ne the term â modeâ here as a connected component of the data manifold. Table 1: Grid Search for Hyperparameters. nLayerG nLayerD sizeG sizeD dropoutD [True,False] [SGD,Adam] optimG [SGD,Adam] optimD [1e-2,1e-3,1e-4] lr [2,3,4] [2,3,4] [400,800,1600,3200] [256, 512, 1024] | 1612.02136#16 | 1612.02136#18 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#18 | Mode Regularized Generative Adversarial Networks | _ # 4.1.1 GRID SEARCH FOR MNIST GAN MODELS In order to systemically explore the effect of our pro- posed regularizers on GAN models in terms of im- proving stability and sample quality, we use a large scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters. | 1612.02136#17 | 1612.02136#19 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#19 | Mode Regularized Generative Adversarial Networks | For evaluation, we ï¬ rst train a 4-layer CNN classiï¬ er on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribu- tion of MODE score is shown in Figure 3. Clearly, our proposed regularizer signiï¬ cantly improves the MODE scores and thus demonstrates its beneï¬ ts on stabilizing GANs and improving sample qualities. 10 59.97 CAN sem Regularized GAN 22.29 20 7.34 14.86 96 * 6.19 6.19 743 soz 433 35 aay 248833 2.79 oles eM 2 cee conf) onl oo Si Py a 00.5 05-1 12 23 | 1612.02136#18 | 1612.02136#20 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#20 | Mode Regularized Generative Adversarial Networks | Figure 3: The distributions of MODE scores for GAN and regularized GAN. To illustrate the effect of regularizers with different coefï¬ cients, we randomly pick an architecture and train it with different λ1 = λ2. The results are shown in Figure 4. Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of the λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN. 4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to a number in [0,999] in a single 64x64 image, and then train DCGAN as a baseline model on the 1000 modes dataset. The digits on the image are sampled with different | 1612.02136#19 | 1612.02136#21 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#21 | Mode Regularized Generative Adversarial Networks | 6 Published as a conference paper at ICLR 2017 probabilities, in order to test the modelâ s capability to preserve small modes in generation. We again use a pre-trained classiï¬ er for MNIST instead of a human to evaluate the models. Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg- DCGAN) allows to substantially reduce the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (like in the Inception score). Set 1 #Miss KL Set 2 #Miss KL Set 3 #Miss KL Set 4 #Miss KL DCGAN 204.7 77.9 204.3 60.2 103.4 75.9 89.3 77.8 Reg-DCGAN 32.1 62.3 71.5 58.9 42.7 68.4 31.6 67.8 The performances on the compositional experiment are measured by two metrics. #Miss represents the classiï¬ er-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classiï¬ er-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and KL divergence drop dramatically among all the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem. | 1612.02136#20 | 1612.02136#22 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#22 | Mode Regularized Generative Adversarial Networks | 4.2 CELEBA To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B. 4.2.1 MISSING MODES ESTIMATION ON CELEBA We also employ a third party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise in the input layer in the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as on the missing modes. Table 3: Number of images on the missing modes on CelebA estimated by a third-party discrimina- tor. The numbers in the brackets indicate the dimension of prior z. Ï denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods . | 1612.02136#21 | 1612.02136#23 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#23 | Mode Regularized Generative Adversarial Networks | Ï DCGAN (100) DCGAN (200) Reg-GAN (100) Reg-GAN (200) MDGAN (200) 3.5 5463 17089 754 3644 74 4.0 590 15832 42 391 13 The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform baseline DCGAN models on all settings. Especially, MDGAN suppresses other models, showing its superiority on modes preserving. | 1612.02136#22 | 1612.02136#24 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#24 | Mode Regularized Generative Adversarial Networks | We also ï¬ nd that, although sharing the same architec- ture, the DCGAN with 200-dimensional noise performs quite worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently. To get a better understanding of the modelsâ performance, we want to ï¬ gure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instruc- tive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data | 1612.02136#23 | 1612.02136#25 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#25 | Mode Regularized Generative Adversarial Networks | 7 Published as a conference paper at ICLR 2017 for GAN to learn. Nonetheless, our best model, MDGAN still capture certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black, and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them. Hada AAAS Figure 5: Test set images that are on missing mode. Left: Both MDGAN and DCGAN missing. Right: Only DCGAN missing. | 1612.02136#24 | 1612.02136#26 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#26 | Mode Regularized Generative Adversarial Networks | 4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES After quantitative evaluation, we manually examine the generated samples by our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 62. | 1612.02136#25 | 1612.02136#27 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#27 | Mode Regularized Generative Adversarial Networks | a 5 â a -GAN , a) es d a e " . Te }81 <[2) - i ~ J VAEGAN «ly â _ bg R: ie: rd bE we é| 7 , = | Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharp textures. Both MDGAN and Regularized-GAN generate clear and natural-looking face images. | 1612.02136#26 | 1612.02136#28 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#28 | Mode Regularized Generative Adversarial Networks | Although ALIâ s samples are plausible, they are sightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp. As to sample quality, it is worth noting that the samples from MDGAN enjoy fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower compared with the other four compared models. We attribute it to the help of the autoencoder as the regularizer to alter the generation manifolds. In this way, the generator is able to learn ï¬ ne-grained details such as face edges. As a result, MDGAN is able to reduce distortions. 2For fair comparison, we also recommend readers to refer to the original papers Dumoulin et al. (2016); Larsen et al. (2015); Radford et al. (2015) for the reported samples of the compared. The ALI sam- ples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_ samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https: //github.com/Newmu/dcgan_code/ | 1612.02136#27 | 1612.02136#29 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#29 | Mode Regularized Generative Adversarial Networks | 8 Published as a conference paper at ICLR 2017 MDGAN Regularized an -GAN Figure 7: Sideface samples generated by Regularized-GAN and MDGAN. In terms of missing modes problem, we instructed ï¬ ve individuals to conduct human evaluation on the generated samples. They achieve consensus that MDGAN wins in terms of mode diversities. Two people pointed out that MDGAN generates a larger amount of samples with side faces than other models. We select several of these side face samples in Figure 7. Clearly, our samples maintain acceptable visual ï¬ delity meanwhile share diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring beneï¬ ts for both training stability and mode variety without the loss of sample quality. # 5 CONCLUSIONS Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difï¬ cult and sensitive to hyper-parameters, all the while, missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and com- puting efforts to ï¬ ne tune the hyper-parameters, in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs. We provide systematic ways to measure and avoid the missing modes problem and stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold. | 1612.02136#28 | 1612.02136#30 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#30 | Mode Regularized Generative Adversarial Networks | # ACKNOWLEDGEMENTS We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us on running VAEGAN experiments. We appreciate for the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural Science Foundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6). # REFERENCES Emily L Denton, Soumith Chintala, Rob Fergus, et al. | 1612.02136#29 | 1612.02136#31 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#31 | Mode Regularized Generative Adversarial Networks | Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in neural information processing systems, pp. 1486â 1494, 2015. Jeff Donahue, Philipp Kr¨ahenb¨uhl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi- etro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016. | 1612.02136#30 | 1612.02136#32 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#32 | Mode Regularized Generative Adversarial Networks | 9 Published as a conference paper at ICLR 2017 Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor- mation Processing Systems, pp. 2672â 2680, 2014. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arxiv, 2016. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015. Christian Ledig, Lucas Theis, Ferenc Husz´ar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Jo- hannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016. Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016. Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: | 1612.02136#31 | 1612.02136#33 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#33 | Mode Regularized Generative Adversarial Networks | Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016. Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016. Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Husz´ar. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016. Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. | 1612.02136#32 | 1612.02136#34 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#34 | Mode Regularized Generative Adversarial Networks | Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613â 621, 2016. Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar- ial networks. In ECCV, 2016. Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016. Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262â | 1612.02136#33 | 1612.02136#35 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#35 | Mode Regularized Generative Adversarial Networks | 277. Springer, 2016. Jun-Yan Zhu, Philipp Kr¨ahenb¨uhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipula- tion on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016. 10 Published as a conference paper at ICLR 2017 # A APPENDIX: PSEUDO CODE FOR MDGAN In this Appendix, we give the detailed training procedure of an MDGAN example we discuss in Section 3.3. Manifold Step: 1. Sample {x1, x2, · · · xm} from data generating distribution pdata(x). 2. Update discriminator D1 using SGD with gradient ascent: 1 Voy = > aflog Di(xi) + log(1 â Di (G(E(%:))))] i=l 3. | 1612.02136#34 | 1612.02136#36 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#36 | Mode Regularized Generative Adversarial Networks | Update generator G using SGD with gradient ascent: i< : Vo, â > [Alog Di (G(E(x:))) â |i - @(E(:))|7) i=1 Diffusion Step: 4. Sample {x1, x2, · · · xm} from data generating distribution pdata(x). 5. Sample {z1, z2, · · · zm} from prior distribution pÏ (z). 6. Update discriminator D2 using SGD with gradient ascent: 1 Var SV flog D2(G(E(x;))) + log(1 â Da(z:))] i=1 7. | 1612.02136#35 | 1612.02136#37 | 1612.02136 | [
"1511.05440"
]
|
1612.02136#37 | Mode Regularized Generative Adversarial Networks | Update generator G using SGD with gradient ascent: m Vo, <> log Da(G(es)) i=l Figure 8: The detailed training procedure of an MDGAN example. # B APPENDIX: ARCHITECTURE FOR EXPERIMENTS We use similar architectures for Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN Radford et al. (2015). Apart from the discriminator and generator which are the same as DCGAN, we add an encoder which is the â | 1612.02136#36 | 1612.02136#38 | 1612.02136 | [
"1511.05440"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.