# 5.2.1 Fast R-CNN (on fixed proposals)

To better investigate FPN's effects on the region-based detector alone, we conduct ablations of Fast R-CNN on a fixed set of proposals. We choose to freeze the proposals as computed by RPN on FPN (Table 1(c)), because it has good performance on small objects that are to be recognized by the detector. For simplicity we do not share features between Fast R-CNN and RPN, except when specified.

As a ResNet-based Fast R-CNN baseline, following [16], we adopt RoI pooling with an output size of 14×14 and attach all conv5 layers as the hidden layers of the head. This gives an AP of 31.9 in Table 2(a). Table 2(b) is a baseline exploiting an MLP head with 2 hidden fc layers, similar to the head in our architecture. It gets an AP of 28.8, indicating that the 2-fc head does not give us any orthogonal advantage over the baseline in Table 2(a).
Table 2(c) shows the results of our FPN in Fast R-CNN. Comparing with the baseline in Table 2(a), our method improves AP by 2.0 points and small object AP by 2.1 points. Comparing with the baseline that also adopts a 2fc head (Table 2(b)), our method improves AP by 5.1 points.⁵ These comparisons indicate that our feature pyramid is superior to single-scale features for a region-based object detector.

⁵We expect that a stronger architecture of the head [30] will improve upon our results, which is beyond the focus of this paper.
Table 2(d) and (e) show that removing top-down connections or removing lateral connections leads to inferior results, similar to what we have observed in the above subsection for RPN. It is noteworthy that removing top-down connections (Table 2(d)) significantly degrades the accuracy, suggesting that Fast R-CNN suffers from using the low-level features at the high-resolution maps.
Competition-winning single-model results follow ours:

| method | backbone | competition | image pyramid | [email protected] (test-dev) | AP | APs | APm | APl | [email protected] (test-std) | AP | APs | APm | APl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ours, Faster R-CNN on FPN | ResNet-101 | - | | 59.1 | 36.2 | 18.2 | 39.0 | 48.2 | 58.5 | 35.8 | 17.5 | 38.7 | 47.8 |
| G-RMI† | Inception-ResNet | 2016 | | - | 34.7 | - | - | - | - | - | - | - | - |
| AttractioNet‡ [10] | VGG16 + Wide ResNet§ | 2016 | ✓ | 53.4 | 35.7 | 15.6 | 38.0 | 52.7 | 52.9 | 35.3 | 14.7 | 37.6 | 51.9 |
| Faster R-CNN +++ [16] | ResNet-101 | 2015 | ✓ | 55.7 | 34.9 | 15.6 | 38.7 | 50.9 | - | - | - | - | - |
| Multipath [40] (on minival) | VGG-16 | 2015 | | 49.6 | 31.5 | - | - | - | - | - | - | - | - |
| ION‡ [2] | VGG-16 | 2015 | | 53.4 | 31.2 | 12.8 | 32.9 | 45.2 | 52.9 | 30.7 | 11.8 | 32.8 | 44.8 |

Table 4. Comparisons of single-model results on the COCO detection benchmark. Some results were not available on the test-std set, so we also include the test-dev results (and for Multipath [40] on minival). †: http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf. ‡: http://mscoco.org/dataset/#detections-leaderboard. §: This entry of AttractioNet [10] adopts VGG-16 for proposals and Wide ResNet [39] for object detection, so is not strictly a single-model result.
In Table 2(f), we adopt Fast R-CNN on the single finest scale feature map of P2. Its result (33.4 AP) is marginally worse than that of using all pyramid levels (33.9 AP, Table 2(c)). We argue that this is because RoI pooling is a warping-like operation, which is less sensitive to the region's scales. Despite the good accuracy of this variant, it is based on the RPN proposals of {Pk} and has thus already benefited from the pyramid representation.
# 5.2.2 Faster R-CNN (on consistent proposals)
In the above we used a fixed set of proposals to investigate the detectors. But in a Faster R-CNN system [29], the RPN and Fast R-CNN must use the same network backbone in order to make feature sharing possible. Table 3 shows the comparisons between our method and two baselines, all using consistent backbone architectures for RPN and Fast R-CNN. Table 3(a) shows our reproduction of the baseline Faster R-CNN system as described in [16]. Under controlled settings, our FPN (Table 3(c)) is better than this strong baseline by 2.3 points AP and 3.8 points [email protected].
Note that Table 3(a) and (b) are baselines that are much stronger than the baseline provided by He et al. [16] in Table 3(*). We find the following implementations contribute to the gap: (i) We use an image scale of 800 pixels instead of 600 in [11, 16]; (ii) We train with 512 RoIs per image, which accelerates convergence, in contrast to 64 RoIs in [11, 16]; (iii) We use 5 scale anchors instead of 4 in [16] (adding 32²); (iv) At test time we use 1000 proposals per image instead of 300 in [16]. So comparing with He et al.'s ResNet-50 Faster R-CNN baseline in Table 3(*), our method improves AP by 7.6 points and [email protected] by 9.6 points.
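To make the comparison concrete, the four differences can be written down as a small settings sketch; the field names below are purely illustrative (they do not come from any released codebase) and simply restate the numbers above.

```python
# Hypothetical settings, restating the four implementation differences listed above.
baseline_he_et_al = dict(
    image_scale=600,        # shorter image side, in pixels
    rois_per_image=64,      # RoIs sampled per image during training
    num_anchor_scales=4,
    test_proposals=300,     # proposals per image at test time
)
ours = dict(
    image_scale=800,
    rois_per_image=512,     # accelerates convergence
    num_anchor_scales=5,    # adds a 32^2 anchor scale
    test_proposals=1000,
)
```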
| share features? | ResNet-50 [email protected] | ResNet-50 AP | ResNet-101 [email protected] | ResNet-101 AP |
|---|---|---|---|---|
| no | 56.9 | 33.9 | 58.0 | 35.0 |
| yes | 57.2 | 34.3 | 58.2 | 35.2 |
Table 5. More object detection results using Faster R-CNN and our FPNs, evaluated on minival. Sharing features increases train time by 1.5× (using 4-step training [29]), but reduces test time.
Sharing features. In the above, for simplicity we do not share the features between RPN and Fast R-CNN. In Table 5, we evaluate sharing features following the 4-step training described in [29]. Similar to [29], we find that sharing features improves accuracy by a small margin. Feature sharing also reduces the testing time.
Running time. With feature sharing, our FPN-based Faster R-CNN system has an inference time of 0.148 seconds per image on a single NVIDIA M40 GPU for ResNet-50, and 0.172 seconds for ResNet-101.⁶ As a comparison, the single-scale ResNet-50 baseline in Table 3(a) runs at 0.32 seconds. Our method introduces a small extra cost from the extra layers in the FPN, but has a lighter weight head. Overall our system is faster than the ResNet-based Faster R-CNN counterpart. We believe the efficiency and simplicity of our method will benefit future research and applications.

⁶These runtimes are updated from an earlier version of this paper.
# 5.2.3 Comparing with COCO Competition Winners
We find that our ResNet-101 model in Table 5 is not sufficiently trained with the default learning rate schedule. So we increase the number of mini-batches by 2× at each learning rate when training the Fast R-CNN step. This increases AP on minival to 35.6, without sharing features. This model is the one we submitted to the COCO detection leaderboard, shown in Table 4. We have not evaluated its feature-sharing version due to limited time, which should be slightly better as implied by Table 5.
Table 4 compares our method with the single-model results of the COCO competition winners, including the 2016 winner G-RMI and the 2015 winner Faster R-CNN+++. Without adding bells and whistles, our single-model entry has surpassed these strong, heavily engineered competitors.
Figure 4. FPN for object segment proposals. The feature pyramid is constructed with identical structure as for object detection. We apply a small MLP on 5×5 windows to generate dense object segments with output dimension of 14×14. Shown in orange are the sizes of the image regions the mask corresponds to for each pyramid level (levels P3–P5 are shown here). Both the corresponding image region size (light orange) and canonical object size (dark orange) are shown. Half octaves are handled by an MLP on 7×7 windows (7 ≈ 5√2), not shown here. Details are in the appendix.
On the test-dev set, our method increases over the existing best results by 0.5 points of AP (36.2 vs. 35.7) and 3.4 points of [email protected] (59.1 vs. 55.7). It is worth noting that our method does not rely on image pyramids and only uses a single input image scale, but still has outstanding AP on small-scale objects. This could only be achieved by high-resolution image inputs with previous methods.
Moreover, our method does not exploit many popular improvements, such as iterative regression [9], hard negative mining [35], context modeling [16], stronger data augmentation [22], etc. These improvements are complementary to FPNs and should boost accuracy further.
Recently, FPN has enabled new top results in all tracks of the COCO competition, including detection, instance segmentation, and keypoint estimation. See [14] for details.
# 6. Extensions: Segmentation Proposals
Our method is a generic pyramid representation and can be used in applications other than object detection. In this section we use FPNs to generate segmentation proposals, following the DeepMask/SharpMask framework [27, 28].
DeepMask/SharpMask were trained on image crops for predicting instance segments and object/non-object scores. At inference time, these models are run convolutionally to generate dense proposals in an image. To generate segments at multiple scales, image pyramids are necessary [27, 28].
It is easy to adapt FPN to generate mask proposals. We use a fully convolutional setup for both training and inference. We construct our feature pyramid as in Sec. 5.1 and set d = 128. On top of each level of the feature pyramid, we apply a small 5×5 MLP to predict 14×14 masks and object scores in a fully convolutional fashion, see Fig. 4. Additionally, motivated by the use of 2 scales per octave in the image pyramid of [27, 28], we use a second MLP of input size 7×7 to handle half octaves. The two MLPs play a similar role as anchors in RPN. The architecture is trained end-to-end; full implementation details are given in the appendix.
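As a concrete illustration of this head, the PyTorch sketch below implements the two fully convolutional MLPs; it is a minimal reimplementation written for this text (assuming d = 128 input channels, the 512-channel hidden layer described in the appendix, and "same" padding), not the authors' released code.

```python
import torch
import torch.nn as nn

class MaskMLP(nn.Module):
    """One fully convolutional 'MLP' head: a k x k hidden conv (512 channels)
    followed by sibling 1x1 convs for a 14x14 mask and an object score."""
    def __init__(self, in_channels=128, k=5, hidden=512, mask_size=14):
        super().__init__()
        self.hidden = nn.Conv2d(in_channels, hidden, k, padding=k // 2)
        self.mask = nn.Conv2d(hidden, mask_size * mask_size, 1)   # 14*14 = 196 mask outputs
        self.score = nn.Conv2d(hidden, 1, 1)                      # object / non-object score

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mask(h), self.score(h)

class FPNMaskProposalHead(nn.Module):
    """Apply a 5x5 MLP and a 7x7 MLP (for half octaves) on every pyramid level."""
    def __init__(self, in_channels=128):
        super().__init__()
        self.mlp5 = MaskMLP(in_channels, k=5)
        self.mlp7 = MaskMLP(in_channels, k=7)

    def forward(self, pyramid):  # pyramid: dict such as {"P2": tensor, ..., "P6": tensor}
        return {name: {"5x5": self.mlp5(feat), "7x7": self.mlp7(feat)}
                for name, feat in pyramid.items()}
```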
| method | image pyramid | AR | ARs | ARm | ARl | time (s) |
|---|---|---|---|---|---|---|
| DeepMask [27] | ✓ | 37.1 | 15.8 | 50.1 | 54.9 | 0.49 |
| SharpMask [28] | ✓ | 39.8 | 17.4 | 53.1 | 59.1 | 0.77 |
| InstanceFCN [4] | ✓ | 39.2 | - | - | - | 1.50† |
| *FPN Mask Results:* | | | | | | |
| single MLP [5×5] | | 43.4 | 32.5 | 49.2 | 53.7 | 0.15 |
| single MLP [7×7] | | 43.5 | 30.0 | 49.6 | 57.8 | 0.19 |
| dual MLP [5×5, 7×7] | | 45.7 | 31.9 | 51.5 | 60.8 | 0.24 |
| + 2× mask resolution | | 46.7 | 31.7 | 53.1 | 63.2 | 0.25 |
| + 2× train schedule | | 48.1 | 32.6 | 54.2 | 65.6 | 0.25 |
Table 6. Instance segmentation proposals evaluated on the first 5k COCO val images. All models are trained on the train set. DeepMask, SharpMask, and FPN use ResNet-50 while InstanceFCN uses VGG-16. DeepMask and SharpMask performance is computed with models available from https://github.com/facebookresearch/deepmask (both are the "zoom" variants). †Runtimes are measured on an NVIDIA M40 GPU, except the InstanceFCN timing which is based on the slower K40.
# 6.1. Segmentation Proposal Results
Results are shown in Table 6. We report segment AR and segment AR on small, medium, and large objects, always for 1000 proposals. Our baseline FPN model with a single 5×5 MLP achieves an AR of 43.4. Switching to a slightly larger 7×7 MLP leaves accuracy largely unchanged. Using both MLPs together increases accuracy to 45.7 AR. Increasing mask output size from 14×14 to 28×28 increases AR another point (larger sizes begin to degrade accuracy). Finally, doubling the training iterations increases AR to 48.1. We also report comparisons to DeepMask [27], SharpMask [28], and InstanceFCN [4], the previous state of the art methods in mask proposal generation. We outperform the accuracy of these approaches by over 8.3 points AR. In particular, we nearly double the accuracy on small objects. Existing mask proposal methods [27, 28, 4] are based on densely sampled image pyramids (e.g., scaled by 2^{−2:0.5:1} in [27, 28]), making them computationally expensive. Our approach, based on FPNs, is substantially faster (our models run at 6 to 7 FPS). These results demonstrate that our model is a generic feature extractor and can replace image pyramids for other multi-scale detection problems.
# 7. Conclusion
We have presented a clean and simple framework for building feature pyramids inside ConvNets. Our method shows significant improvements over several strong baselines and competition winners. Thus, it provides a practical solution for research and applications of feature pyramids, without the need of computing image pyramids. Finally, our study suggests that despite the strong representational power of deep ConvNets and their implicit robustness to scale variation, it is still critical to explicitly address multi-scale problems using pyramid representations.
# A. Implementation of Segmentation Proposals
We use our feature pyramid networks to efficiently generate object segment proposals, adopting an image-centric training strategy popular for object detection [11, 29]. Our FPN mask generation model inherits many of the ideas and motivations from DeepMask/SharpMask [27, 28]. However, in contrast to these models, which were trained on image crops and used a densely sampled image pyramid for inference, we perform fully-convolutional training for mask prediction on a feature pyramid. While this requires changing many of the specifics, our implementation remains similar in spirit to DeepMask. Specifically, to define the label of a mask instance at each sliding window, we think of this window as being a crop on the input image, allowing us to inherit definitions of positives/negatives from DeepMask. We give more details next; see also Fig. 4 for a visualization. We construct the feature pyramid with P2–P6 using the same architecture as described in Sec. 5.1.
We set d = 128. Each level of our feature pyramid is used for predicting masks at a different scale. As in DeepMask, we define the scale of a mask as the max of its width and height. Masks with scales of {32, 64, 128, 256, 512} pixels map to {P2, P3, P4, P5, P6}, respectively, and are handled by a 5×5 MLP. As DeepMask uses a pyramid with half octaves, we use a second slightly larger MLP of size 7×7 (7 ≈ 5√2) to handle half octaves in our model (e.g., a 128√2 scale mask is predicted by the 7×7 MLP on P4). Objects at intermediate scales are mapped to the nearest scale in log space. As the MLP must predict objects at a range of scales for each pyramid level (specifically a half-octave range), some padding must be given around the canonical object size. We use 25% padding. This means that the mask output over {P2, P3, P4, P5, P6} maps to {40, 80, 160, 320, 640} sized image regions for the 5×5 MLP (and to √2× larger corresponding regions for the 7×7 MLP).
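As a small illustration of the assignment rule, the helper below maps a mask scale to its pyramid level by nearest canonical scale in log space; it is a hypothetical utility written for this description and ignores the separate 5×5 vs. 7×7 half-octave choice.

```python
import math

CANONICAL_SCALES = (32, 64, 128, 256, 512)   # pixels, max of mask width/height
LEVELS = ("P2", "P3", "P4", "P5", "P6")

def level_for_mask(scale: float) -> str:
    """Assign a mask to the pyramid level whose canonical scale is nearest in log space."""
    idx = min(range(len(CANONICAL_SCALES)),
              key=lambda i: abs(math.log2(scale) - math.log2(CANONICAL_SCALES[i])))
    return LEVELS[idx]

# e.g. level_for_mask(100) -> "P4" (128 is nearer than 64 in log2 space)
```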
Each spatial position in the feature map is used to predict a mask at a different location. Specifically, at scale Pk, each spatial position in the feature map is used to predict the mask whose center falls within 2^k pixels of that location (corresponding to ±1 cell offset in the feature map). If no object center falls within this range, the location is considered a negative, and, as in DeepMask, is used only for training the score branch and not the mask branch.
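The positive/negative rule above might look like the following sketch; treating the 2^k-pixel threshold as a per-axis (±1 cell) check is our reading of the text rather than a detail the paper spells out.

```python
def is_positive(level_k: int, cell_x: int, cell_y: int,
                center_x: float, center_y: float) -> bool:
    """True if the object's center falls within 2**level_k image pixels
    (i.e. within +/- 1 cell) of this feature-map position at level P_k."""
    stride = 2 ** level_k                 # stride of P_k in image pixels
    px = (cell_x + 0.5) * stride          # image-space location of this cell
    py = (cell_y + 0.5) * stride
    return abs(center_x - px) <= stride and abs(center_y - py) <= stride
```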
The MLP we use for predicting the mask and score is fairly simple. We apply a 5×5 kernel with 512 outputs, followed by sibling fully connected layers to predict a 14×14 mask (14² outputs) and object score (1 output). The model is implemented in a fully convolutional manner (using 1×1 convolutions in place of fully connected layers). The 7×7 MLP for handling objects at half octave scales is identical to the 5×5 MLP except for its larger input region.
During training, we randomly sample 2048 examples per mini-batch (128 examples per image from 16 images) with a positive/negative sampling ratio of 1:3. The mask loss is given 10× higher weight than the score loss. This model is trained end-to-end on 8 GPUs using synchronized SGD (2 images per GPU). We start with a learning rate of 0.03 and train for 80k mini-batches, dividing the learning rate by 10 after 60k mini-batches. The image scale is set to 800 pixels during training and testing (we do not use scale jitter). During inference our fully-convolutional model predicts scores at all positions and scales and masks at the 1000 highest scoring locations. We do not perform any non-maximum suppression or post-processing.
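A minimal sketch of how the loss weighting above could be combined is given below; the choice of a binary cross-entropy for both branches is an assumption (the paper does not restate the loss form here), and restricting the mask term to positives follows the labeling rule described earlier.

```python
import torch
import torch.nn.functional as F

def proposal_loss(mask_logits, score_logits, mask_targets, score_targets,
                  is_positive, mask_weight=10.0):
    """Score loss on all sampled locations; mask loss (10x weight) on positives only."""
    score_loss = F.binary_cross_entropy_with_logits(score_logits, score_targets)
    if is_positive.any():
        mask_loss = F.binary_cross_entropy_with_logits(
            mask_logits[is_positive], mask_targets[is_positive])
    else:
        mask_loss = mask_logits.sum() * 0.0   # keep the graph valid when a batch has no positives
    return mask_weight * mask_loss + score_loss
```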
# References
[1] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden. Pyramid methods in image processing. RCA engineer, 1984.
[2] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR, 2016.
[3] Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos. A unified multi-scale deep convolutional neural network for fast object detection. In ECCV, 2016.
[4] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. TPAMI, 2014.
[7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 2010.
[8] G. Ghiasi and C. C. Fowlkes. Laplacian pyramid reconstruction and refinement for semantic segmentation. In ECCV, 2016.
[9] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware CNN model. In ICCV, 2015.
[10] S. Gidaris and N. Komodakis. Attend refine repeat: Active box proposal generation via in-out localization. In BMVC, 2016.
[11] R. Girshick. Fast R-CNN. In ICCV, 2015.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[13] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. arXiv:1703.06870, 2017.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[17] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-fine feature aggregation. In CVPR, 2016.
[18] T. Kong, A. Yao, Y. Chen, and F. Sun. Hypernet: Towards accurate region proposal generation and joint object detection. In CVPR, 2016.
[19] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[20] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. SSD: Single shot multibox detector. In ECCV, 2016.
[23] W. Liu, A. Rabinovich, and A. C. Berg. ParseNet: Looking wider to see better. In ICLR workshop, 2016.
[24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[26] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[27] P. O. Pinheiro, R. Collobert, and P. Dollár. Learning to segment object candidates. In NIPS, 2015.
[28] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV, 2016.
[29] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[30] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. PAMI, 2016.
[31] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[32] H. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes. Technical Report CMU-CS-95-158R, Carnegie Mellon University, 1995.
[33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[35] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
[36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[37] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 2013.
Published as a conference paper at ICLR 2017
# MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS
Tong Che†, Yanran Li‡, Athul Paul Jacob†,§, Yoshua Bengio†, Wenjie Li‡
†Montreal Institute for Learning Algorithms, Université de Montréal, Montréal, QC H3T 1J4, Canada
‡Department of Computing, The Hong Kong Polytechnic University, Hong Kong
§David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
{tong.che,ap.jacob,yoshua.bengio}@umontreal.ca, {csyli,cswjli}@comp.polyu.edu.hk
# ABSTRACT | 1612.02136#0 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 1 | # ABSTRACT
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.
# 1 INTRODUCTION | 1612.02136#1 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 2 | # 1 INTRODUCTION
Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wu et al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks. | 1612.02136#2 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 3 | Despite their success, GANs are generally considered very hard to train due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.
This issue has been the subject of several recent papers proposing several tricks and new architectures to stabilize GAN's training and encourage its samples' diversity. However, we argue that a general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds. In fact, the shape of the discriminator function in the data
∗Authors contributed equally.
1 | 1612.02136#3 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 4 | ∗Authors contributed equally.
space can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt the training of GANs (Figure 1).
To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones. | 1612.02136#4 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 5 | Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at the least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline.
# 2 RELATED WORK
The GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator and the discriminator are defined by deep neural networks. | 1612.02136#5 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 6 | The GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator and the discriminator are defined by deep neural networks.
In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map Wang & Gupta (2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-time image manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito & Matsumoto (2016); Vondrick et al. (2016), texture synthesis, style transfer, and video stylization Li & Wand (2016). | 1612.02136#6 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 7 | Researchers also aim at stretching GAN's limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in the training objectives of generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the sample visual fidelity. Recent literature have | 1612.02136#7 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 9 | Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN's training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN's training. The generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016).
Published as a conference paper at ICLR 2017 | 1612.02136#9 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 10 | In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN's training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered as work orthogonal to ours. | 1612.02136#10 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 11 | Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes and thus improve GAN's training stability and sample qualities. We demonstrate detailed differences from various aspects in Appendix D.
# 3 MODE REGULARIZERS FOR GANS
The GAN training procedure can be viewed as a non-cooperative two-player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. Then the generator G has to take advantage of the local gradient ∇ log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold. | 1612.02136#11 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 12 | We now take a closer look at the root cause of the instabilities while training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014); Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, Denton et al. (2015) noted a common failure pattern for training GANs which is the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples, D is nearly zero. In such cases, the generator will receive no gradient to improve itself.1 | 1612.02136#12 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 13 | Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes.
3.1 GEOMETRIC METRICS REGULARIZER
Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator, while in supervised models the optimization targets are distance functions with nice geometric properties. The latter usually provide much easier training gradients than the former, especially at the early stages of training. | 1612.02136#13 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 14 | 1This problem exists even when we use log D(G(z)) as target for the generator, as noted by Denton et al. (2015) and our experiments.
Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z) : Z → X generates samples by sampling first from a fixed prior distribution in space Z followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x) : X → Z. Assuming d is some similarity metric in the data space, we add E_{x∼p_d}[d(x, G ∘ E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error. In practice, there are many options for the distance measure d: for instance, the pixel-wise L2 distance, or the distance between features learned by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016). | 1612.02136#14 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 15 | The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, the L2 metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.
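As a concrete illustration, here is a minimal PyTorch-style sketch of this reconstruction term, assuming G and E are torch.nn.Module networks and taking the pixel-wise L2 distance as d; the function name and the choice of d are illustrative rather than taken from the paper's released code.

```python
import torch.nn.functional as F

def geometric_regularizer(G, E, x_real):
    """E_{x~p_d}[ d(x, G(E(x))) ] with d chosen as the pixel-wise L2 distance.

    G (generator) and E (encoder) are assumed to be torch.nn.Module networks;
    x_real is a mini-batch of real samples standing in for draws from p_d.
    """
    x_rec = G(E(x_real))               # map each real point onto the generation manifold
    return F.mse_loss(x_rec, x_real)   # minimized for both G and E, as described above
```

A feature-space distance (e.g. discriminator or VGG features, as mentioned in the previous paragraph) can be swapped in for the pixel-wise loss without changing the structure.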
# 3.2 MODE REGULARIZER
In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum Σ_i ∇_θ log D(G_θ(z_i)). The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect so that the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes. | 1612.02136#15 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 16 | As an example, consider the situation in Figure 2. For most z, the gradient of the generator ∇_θ log D(G_θ(z)) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients to push itself towards the minor mode M2. However, it is possible that such z is of low or zero probability in the prior distribution p0.
Given this observation, consider a regularized GAN model with the metric regularizer. Assume M0 is a minor mode of the data generating distribution. For x ∈ M0, we know that if G ∘ E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufficient training examples of mode M0 in the training data, we add the mode regularizer E_{x∼p_d}[log D(G ∘ E(x))] to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve a fair probability mass distribution across different modes.
Figure 2: Illustration of the missing modes problem.
In short, our regularized optimization target for the generator and the encoder becomes: | 1612.02136#16 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 17 | Figure 2: Illustration of the missing modes problem.
In short, our regularized optimization target for the generator and the encoder becomes:
T_G = −E_z[log D(G(z))] + E_{x∼p_d}[λ1 d(x, G ∘ E(x)) + λ2 log D(G ∘ E(x))]   (1)
T_E = E_{x∼p_d}[λ1 d(x, G ∘ E(x)) + λ2 log D(G ∘ E(x))]   (2)
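The sketch below is a literal, PyTorch-style transcription of Eq. (1)-(2): G, E and D are placeholder networks (D is assumed to output probabilities in (0, 1)) and the pixel-wise L2 distance stands in for d; how the resulting targets are fed to an optimizer is left to the training loop.

```python
import torch
import torch.nn.functional as F

def regularized_targets(G, E, D, x_real, z, lam1=0.2, lam2=0.4, eps=1e-8):
    """T_G and T_E of Eq. (1)-(2), written out term by term."""
    x_rec = G(E(x_real))                                  # G o E(x)
    d_term = F.mse_loss(x_rec, x_real)                    # d(x, G o E(x)), here pixel-wise L2
    log_d_rec = torch.log(D(x_rec) + eps).mean()          # E_x[ log D(G o E(x)) ]
    log_d_gen = torch.log(D(G(z)) + eps).mean()           # E_z[ log D(G(z)) ]
    t_g = -log_d_gen + lam1 * d_term + lam2 * log_d_rec   # Eq. (1)
    t_e = lam1 * d_term + lam2 * log_d_rec                # Eq. (2)
    return t_g, t_e
```

The defaults lam1 = 0.2 and lam2 = 0.4 mirror the randomly selected loss weights used later in the grid search of Section 4.1.1.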
3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS
On some large-scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples. | 1612.02136#17 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 18 | The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.
An example of manifold-diffusion training of a GAN (MDGAN for short) is as follows: we train a discriminator D1 which separates the samples x and G ∘ E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D1(G ∘ E(x)) + λ d(x, G ∘ E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between the distributions G(z) and G ∘ E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low-dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples.
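The PyTorch-style sketch below spells out one MDGAN iteration under the assumptions that D1 and D2 output probabilities in (0, 1), that d is the pixel-wise L2 distance, and that the optimizers and the weight lam are supplied by the caller; it is a schematic reading of the two steps above, not the authors' training code (their exact procedure is in Appendix A).

```python
import torch
import torch.nn.functional as F

def mdgan_iteration(G, E, D1, D2, x_real, z, opt_ge, opt_d1, opt_d2,
                    lam=1e-2, eps=1e-8):
    """One schematic MDGAN iteration: a manifold step, then a diffusion step."""
    # ---- Manifold step: pull G(E(x)) onto the real data manifold ----
    x_rec = G(E(x_real))
    d1_loss = -(torch.log(D1(x_real) + eps)
                + torch.log(1.0 - D1(x_rec.detach()) + eps)).mean()
    opt_d1.zero_grad()
    d1_loss.backward()
    opt_d1.step()

    x_rec = G(E(x_real))
    # Signs chosen so this is a minimization: reward fooling D1, penalize d(x, G(E(x))).
    ge_loss = -torch.log(D1(x_rec) + eps).mean() + lam * F.mse_loss(x_rec, x_real)
    opt_ge.zero_grad()
    ge_loss.backward()
    opt_ge.step()

    # ---- Diffusion step: spread G(z) over the matched manifold ----
    x_rec = G(E(x_real)).detach()        # D2 treats G(E(x)) as its "real" side
    d2_loss = -(torch.log(D2(x_rec) + eps)
                + torch.log(1.0 - D2(G(z).detach()) + eps)).mean()
    opt_d2.zero_grad()
    d2_loss.backward()
    opt_d2.step()

    g_loss = -torch.log(D2(G(z)) + eps).mean()   # train G to maximize log D2(G(z))
    opt_ge.zero_grad()
    g_loss.backward()
    opt_ge.step()
```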
3.4 EVALUATION METRICS FOR MODE MISSING | 1612.02136#18 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 19 | 3.4 EVALUATION METRICS FOR MODE MISSING
In order to estimate both the missing modes and the sample qualities in our experiments, we used several different metrics for different experiments instead of human annotators.
The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset:
exp (E_x KL(p(y|x) || p∗(y)))   (3)
where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p∗(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has a high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p∗(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score: | 1612.02136#19 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
exp (ExKL(p(y|x)||p(y)) â KL(pâ(y)||p(y))) (4) | 1612.02136#19 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 20 | exp (E_x KL(p(y|x) || p(y)) − KL(p∗(y) || p(y)))   (4)
where p(y) is the distribution of labels in the training data. According to our human evaluation experiences, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
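A small NumPy sketch of Eq. (3) and Eq. (4), assuming probs is an (N, K) array of classifier softmax outputs p(y|x) for N generated samples and p_train is the (K,) label distribution p(y) of the training data; the names are illustrative.

```python
import numpy as np

def _kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def inception_and_mode_score(probs, p_train):
    """Eq. (3) and Eq. (4) computed from classifier outputs on generated samples."""
    p_star = probs.mean(axis=0)                                # p*(y): label marginal of the samples
    mean_kl_star = np.mean([_kl(p, p_star) for p in probs])    # E_x KL(p(y|x) || p*(y))
    mean_kl_train = np.mean([_kl(p, p_train) for p in probs])  # E_x KL(p(y|x) || p(y))
    inception = np.exp(mean_kl_star)                           # Eq. (3)
    mode = np.exp(mean_kl_train - _kl(p_star, p_train))        # Eq. (4)
    return inception, mode
```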
However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (see Goodfellow et al. (2014) for a proof):
D∗(s) ≈ p_g(s) / (p_g(s) + p_d(s))   (5) | 1612.02136#20 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 21 | D∗(s) ≈ p_g(s) / (p_g(s) + p_d(s))   (5)
where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D∗ from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D∗. After training, we test D∗ on the test set T of the real dataset. If for any test sample t ∈ T, the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.
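A minimal sketch of the resulting probe, assuming d_scores already holds D∗(t) for every t in the test set T (the third-party discriminator itself, trained on noise-corrupted inputs, is not shown); the 0.95 threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np

def missed_mode_mass(d_scores, threshold=0.95):
    """Fraction of real test points whose discrimination value is close to 1,
    used as a rough estimate of the total probability mass of missed modes."""
    d_scores = np.asarray(d_scores, dtype=np.float64)
    return float(np.mean(d_scores > threshold))
```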
4 EXPERIMENTS
# 4.1 MNIST
We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term “mode” here as a connected component of the data manifold.
Table 1: Grid Search for Hyperparameters. | 1612.02136#21 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 22 | Table 1: Grid Search for Hyperparameters.
nLayerG   [2, 3, 4]
nLayerD   [2, 3, 4]
sizeG     [400, 800, 1600, 3200]
sizeD     [256, 512, 1024]
dropoutD  [True, False]
optimG    [SGD, Adam]
optimD    [SGD, Adam]
lr        [1e-2, 1e-3, 1e-4]
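For reference, the full search space of Table 1 can be enumerated directly; the dictionary below simply transcribes the ranges above (a minimal sketch, not the authors' experiment harness).

```python
from itertools import product

grid = {                                    # search ranges from Table 1
    "nLayerG": [2, 3, 4],
    "nLayerD": [2, 3, 4],
    "sizeG": [400, 800, 1600, 3200],
    "sizeD": [256, 512, 1024],
    "dropoutD": [True, False],
    "optimG": ["SGD", "Adam"],
    "optimD": ["SGD", "Adam"],
    "lr": [1e-2, 1e-3, 1e-4],
}

configs = [dict(zip(grid.keys(), values)) for values in product(*grid.values())]
print(len(configs))                         # total number of hyper-parameter settings
```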
# 4.1.1 GRID SEARCH FOR MNIST GAN MODELS
In order to systematically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large-scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters. | 1612.02136#22 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 23 | For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits on stabilizing GANs and improving sample qualities.
Figure 3: The distributions of MODE scores for GAN and regularized GAN.
To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different values of λ1 = λ2. The results are shown in Figure 4. | 1612.02136#23 | Mode Regularized Generative Adversarial Networks | Although Generative Adversarial Networks achieve state-of-the-art results on
Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN.
4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES
In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits into a number in [0, 999] in a single 64x64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits on the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.
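A minimal sketch of how such a compositional dataset can be assembled is given below; the digit source images and the exact placement on the canvas are illustrative assumptions.

```python
# Sketch: assemble a 64x64 "three-digit number" image from single MNIST digits.
# `digits_by_class` is an assumed dict mapping each class 0-9 to an array of
# 28x28 images; the non-uniform class probabilities create small modes.
import numpy as np

def sample_composite(digits_by_class, class_probs, rng=np.random.default_rng()):
    canvas = np.zeros((64, 64), dtype=np.float32)
    label = 0
    for position in range(3):                     # hundreds, tens, ones digit
        c = int(rng.choice(10, p=class_probs))
        label = label * 10 + c
        imgs = digits_by_class[c]
        img = imgs[rng.integers(len(imgs))]
        x0 = 4 + position * 19                    # rough side-by-side layout
        canvas[18:46, x0:x0 + 19] = img[:, 4:23]  # crop each digit to 28x19
    return canvas, label                          # label is a mode in [0, 999]
```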
Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) allows a substantial reduction in the number of missed modes as well as in the KL divergence that measures the plausibility of the generated samples (as in the Inception score).
            Set 1           Set 2           Set 3           Set 4
            #Miss   KL      #Miss   KL      #Miss   KL      #Miss   KL
DCGAN       204.7   77.9    204.3   60.2    103.4   75.9    89.3    77.8
Reg-DCGAN   32.1    62.3    71.5    58.9    42.7    68.4    31.6    67.8
Performance on the compositional experiment is measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically on all sets of the compositional MNIST dataset, which again demonstrates the effectiveness of our regularizer in preventing the missing modes problem.
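Both metrics can be computed directly from the classifier's predictions, as in the sketch below; `pred_labels` are classifier-assigned numbers in [0, 999] for generated samples and `train_counts` are the number frequencies in the training data (both assumed inputs).

```python
# Sketch of the two metrics in Table 2: #Miss is the number of 3-digit values
# the model never generates, and KL compares the classifier-reported
# distribution of generated numbers with the training distribution.
import numpy as np

def missing_modes_and_kl(pred_labels, train_counts, n_modes=1000, eps=1e-12):
    gen_counts = np.bincount(np.asarray(pred_labels), minlength=n_modes).astype(float)
    n_miss = int((gen_counts == 0).sum())
    p_gen = (gen_counts + eps) / (gen_counts + eps).sum()
    p_train = np.asarray(train_counts, dtype=float) + eps
    p_train = p_train / p_train.sum()
    kl = float((p_gen * (np.log(p_gen) - np.log(p_train))).sum())
    return n_miss, kl
```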
4.2 CELEBA
To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.
4.2.1 MISSING MODES ESTIMATION ON CELEBA
We also employ a third-party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise to the input layer of the discriminator network. For each GAN model to be evaluated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as lying on the missing modes.
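A minimal sketch of this estimator is shown below. The convolutional architecture, input size, and score threshold are illustrative assumptions; the key points are the Gaussian noise injected at the input and the thresholding of the estimator's outputs on held-out test images.

```python
# Sketch of the third-party "mode estimator": a discriminator with Gaussian
# noise injected at its input is trained on generated vs. training images and
# then applied to held-out test images; test images receiving high scores are
# treated as lying on missing modes. Architecture and threshold are assumptions.
import torch
import torch.nn as nn

class NoisyDiscriminator(nn.Module):
    def __init__(self, sigma=3.5):
        super().__init__()
        self.sigma = sigma
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1),
        )

    def forward(self, x):
        x = x + self.sigma * torch.randn_like(x)   # noise at the input layer
        return self.net(x)

def count_missing_mode_images(estimator, test_images, threshold=0.5):
    estimator.eval()
    with torch.no_grad():
        scores = torch.sigmoid(estimator(test_images)).squeeze(1)
    return int((scores > threshold).sum().item())
```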
Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.
σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   5463          17089         754             3644            74
4.0   590           15832         42              391             13
The comparison results are shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform the baseline DCGAN models on all settings. In particular, MDGAN surpasses the other models, showing its superiority at mode preserving. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than the one with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.
To get a better understanding of the models' performance, we want to figure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data
for GAN to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The side faces, pale faces, dark complexions, and berets are special attributes among these images, but our proposed MDGAN performs well on all of them.
Figure 5: Test set images that lie on missing modes. Left: missed by both MDGAN and DCGAN. Right: missed only by DCGAN.
4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES
After quantitative evaluation, we manually examine the generated samples by our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.
Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally sharper in texture.
Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.
As to sample quality, it is worth noting that the samples from MDGAN exhibit fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion, whereas for the samples generated by MDGAN the level of distortion is lower than for the other four models. We attribute this to the autoencoder acting as a regularizer that alters the generation manifold. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions.
² For a fair comparison, we also recommend that readers refer to the original papers (Dumoulin et al., 2016; Larsen et al., 2015; Radford et al., 2015) for the reported samples of the compared models. The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/
Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.
In terms of the missing modes problem, we instructed five individuals to conduct a human evaluation of the generated samples. They reached a consensus that MDGAN wins in terms of mode diversity. Two people pointed out that MDGAN generates a larger number of samples with side faces than the other models. We show several of these side-face samples in Figure 7. Clearly, our samples maintain acceptable visual fidelity while covering diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring benefits for both training stability and mode variety without loss of sample quality.
# 5 CONCLUSIONS
Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while missing modes from the data distribution or even collapsing large amounts of probability mass onto some modes. Successful GAN training usually requires large amounts of human and computing effort to fine-tune the hyper-parameters, in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.
We provide systematic ways to measure and avoid the missing modes problem and stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold.
# ACKNOWLEDGEMENTS
We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of the grid search experiments on the EBGAN model, and Anders Boesen Lindbo Larsen for kindly helping us run the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs, CIFAR, the National Natural Science Foundation of China (61672445 and 61272291), the Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6).
# REFERENCES
Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv, 2016.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621, 2016.
Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262–277. Springer, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
# A APPENDIX: PSEUDO CODE FOR MDGAN
In this Appendix, we give the detailed training procedure of an MDGAN example we discuss in Section 3.3.
Manifold Step:
1. Sample {x1, x2, ..., xm} from the data generating distribution pdata(x).
2. Update discriminator D1 using SGD with gradient ascent:
$$\nabla_{\theta_{D_1}} \frac{1}{m}\sum_{i=1}^{m}\Big[\log D_1(x_i) + \log\big(1 - D_1(G(E(x_i)))\big)\Big]$$
3. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_{G}} \frac{1}{m}\sum_{i=1}^{m}\Big[\lambda \log D_1(G(E(x_i))) - \|x_i - G(E(x_i))\|^2\Big]$$
Diffusion Step:
4. Sample {x1, x2, ..., xm} from the data generating distribution pdata(x).
5. Sample {z1, z2, ..., zm} from the prior distribution p(z).
6. Update discriminator D2 using SGD with gradient ascent:
$$\nabla_{\theta_{D_2}} \frac{1}{m}\sum_{i=1}^{m}\Big[\log D_2(G(E(x_i))) + \log\big(1 - D_2(G(z_i))\big)\Big]$$
7. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_{G}} \frac{1}{m}\sum_{i=1}^{m}\log D_2(G(z_i))$$
Figure 8: The detailed training procedure of an MDGAN example.
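For readers who prefer code, a compact PyTorch-style sketch of one manifold step and one diffusion step is given below. `G`, `E`, `D1`, `D2` (assumed to output probabilities), their optimizers, and `lam` (the weight λ) are assumed to be defined as in Appendix B; each loss is the negative of the corresponding gradient-ascent objective in Figure 8, and the encoder E can be updated with the same objective by including its parameters in `opt_g`.

```python
# A compact sketch of one MDGAN iteration following Figure 8. G, E, D1, D2
# (outputting probabilities), their optimizers, and lam are assumed helpers;
# each loss below is the negative of the corresponding ascent objective.
import torch

def mdgan_iteration(G, E, D1, D2, opt_d1, opt_d2, opt_g, x, z, lam, eps=1e-8):
    # ---- Manifold step: D1 separates x from G(E(x)); G fits the data manifold.
    recon = G(E(x))
    d1_loss = -(torch.log(D1(x) + eps).mean()
                + torch.log(1 - D1(recon.detach()) + eps).mean())
    opt_d1.zero_grad(); d1_loss.backward(); opt_d1.step()

    recon = G(E(x))
    per_sample_sq = (x - recon).flatten(1).pow(2).sum(dim=1)
    g_manifold = -(lam * torch.log(D1(recon) + eps).mean() - per_sample_sq.mean())
    opt_g.zero_grad(); g_manifold.backward(); opt_g.step()

    # ---- Diffusion step: D2 separates G(E(x)) from G(z).
    with torch.no_grad():
        on_manifold = G(E(x))
    d2_loss = -(torch.log(D2(on_manifold) + eps).mean()
                + torch.log(1 - D2(G(z).detach()) + eps).mean())
    opt_d2.zero_grad(); d2_loss.backward(); opt_d2.step()

    g_diffusion = -torch.log(D2(G(z)) + eps).mean()
    opt_g.zero_grad(); g_diffusion.backward(); opt_g.step()
```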
# B APPENDIX: ARCHITECTURE FOR EXPERIMENTS
We use similar architectures for the Compositional MNIST and CelebA experiments. The architecture is based on that of DCGAN (Radford et al., 2015). Apart from the discriminator and generator, which are the same as in DCGAN, we add an encoder which is the "inverse" of the generator, obtained by reversing the order of the layers and replacing the de-convolutional layers with convolutional layers.
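A sketch of this "inverse" construction for a DCGAN-style 64x64 generator is given below; the channel widths mirror a typical DCGAN generator and are illustrative assumptions, not the exact architecture used in the experiments.

```python
# Sketch: an encoder built as the "inverse" of a DCGAN-style 64x64 generator,
# i.e., the layer order is reversed and transposed convolutions are replaced by
# convolutions. Channel widths here are illustrative assumptions.
import torch.nn as nn

def dcgan_style_encoder(z_dim=100, ch=64):
    return nn.Sequential(
        nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
        nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
        nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),                      # 32 -> 16
        nn.Conv2d(ch * 2, ch * 4, 4, stride=2, padding=1),
        nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2),                      # 16 -> 8
        nn.Conv2d(ch * 4, ch * 8, 4, stride=2, padding=1),
        nn.BatchNorm2d(ch * 8), nn.LeakyReLU(0.2),                      # 8 -> 4
        nn.Conv2d(ch * 8, z_dim, 4, stride=1, padding=0),               # 4 -> 1
        nn.Flatten(),                                                   # (N, z_dim)
    )
```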
One has to pay particular attention to the batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and in the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from the sampled noise z, and the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of the two kinds of batches cannot interfere with each other.
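One way to realize this "shared parameters, separate statistics" behaviour is sketched below: two BatchNorm modules keep their own running statistics while their affine weight and bias tensors are tied. This is an illustrative implementation, not necessarily the exact one used in the experiments.

```python
# Sketch of "shared parameters, separate statistics": two BatchNorm modules
# keep separate running statistics for noise-driven and encoder-driven batches
# while their affine weight/bias tensors are tied.
import torch.nn as nn

class SplitStatBatchNorm2d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.bn_noise = nn.BatchNorm2d(num_features)   # stats for z-batches
        self.bn_enc = nn.BatchNorm2d(num_features)     # stats for E(x)-batches
        self.bn_enc.weight = self.bn_noise.weight      # share learnable scale
        self.bn_enc.bias = self.bn_noise.bias          # share learnable shift

    def forward(self, x, from_encoder: bool):
        return self.bn_enc(x) if from_encoder else self.bn_noise(x)
```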
# C APPENDIX: ADDITIONAL SYNTHESIZED EXPERIMENTS
To demonstrate the effectiveness of the mode-regularized GANs proposed in this paper, we train a very simple GAN architecture on a synthesized 2D dataset, following Metz et al. (2016).
The data is sampled from a mixture of 6 Gaussians with a standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise in [0, 1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to a real 1D number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4.
In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distributions from the standard GAN and our proposed regularized GAN is shown in Figure 9.
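The sketch below mirrors this setup (data sampler, generator, discriminator, optimizers). The hidden width of the discriminator is an assumption, and the regularizers with λ1 = λ2 = 0.005, together with the encoder they require, are omitted here; they would be added to the generator objective as described in Section 3.

```python
# Sketch of the synthetic 2D setup: a mixture of 6 Gaussians (std 0.1) on a
# circle of radius 5, a generator with two 128-unit ReLU hidden layers fed
# 3D uniform noise, and a small discriminator, both trained with Adam (lr 1e-4).
# D's hidden width and the omitted regularizer/encoder are assumptions.
import math
import torch
import torch.nn as nn

def sample_mixture(batch_size):
    centers = torch.tensor([[5 * math.cos(2 * math.pi * k / 6),
                             5 * math.sin(2 * math.pi * k / 6)] for k in range(6)])
    idx = torch.randint(0, 6, (batch_size,))
    return centers[idx] + 0.1 * torch.randn(batch_size, 2)

G = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, 2))
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),    # one fully connected ReLU layer
                  nn.Linear(128, 1))               # maps to a real 1D number
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
sample_noise = lambda batch_size: torch.rand(batch_size, 3)   # uniform in [0, 1]^3
```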
Figure 9: Comparison results on a toy 2D mixture-of-Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, i.e., the original data distribution. The top row shows the standard GAN result: the generator has a hard time oscillating among the modes of the data distribution and is only able to "recover" a single data mode at once. In contrast, the bottom row shows the results of our regularized GAN: its generator quickly captures the underlying multiple modes and fits the target distribution.
# D APPENDIX: COMPARISON WITH VAEGAN
In this appendix, we demonstrate the effectiveness and uniqueness of the mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015), in terms of theoretical differences, sample quality, and the number of missing modes.
With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely $\log p(x) \ge \mathbb{E}_{q(z|x)}[\log p(x|z)] - \mathrm{KL}(q(z|x)\,\|\,p(z))$. This variational bound, together with a GAN loss, is optimized under several assumptions imposed in VAEGAN, the first being that the posterior p(z|x) can be approximated by a factorized Gaussian q(z|x).
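For reference, the bound above follows from Jensen's inequality:

```latex
\log p(x) \;=\; \log \mathbb{E}_{q(z\mid x)}\!\left[\frac{p(x\mid z)\,p(z)}{q(z\mid x)}\right]
\;\ge\; \mathbb{E}_{q(z\mid x)}\big[\log p(x\mid z)\big] \;-\; \mathrm{KL}\big(q(z\mid x)\,\|\,p(z)\big).
```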
The first assumption does not necessarily hold for GANs. We have found that in some trained DCGAN models, the real posterior p(z|x) is not even guaranteed to have only one mode, let alone be anything close to a factorized Gaussian. We believe that this difference in the probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, by contrast, we use a plain auto-encoder instead of a VAE as the objective. Plain auto-encoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There is no conflict between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.
a variety of generative tasks, they are regarded as highly unstable and prone
to miss modes. We argue that these bad behaviors of GANs are due to the very
particular functional shape of the trained discriminators in high dimensional
spaces, which can easily make training stuck or push probability mass in the
wrong direction, towards that of higher concentration than that of the data
generating distribution. We introduce several ways of regularizing the
objective, which can dramatically stabilize the training of GAN models. We also
show that our regularizers can help the fair distribution of probability mass
across the modes of the data generating distribution, during the early phases
of training and thus providing a unified solution to the missing modes problem. | http://arxiv.org/pdf/1612.02136 | Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li | cs.LG, cs.AI, cs.CV, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.LG | 20161207 | 20170302 | [
{
"id": "1511.05440"
},
{
"id": "1611.06624"
},
{
"id": "1609.03126"
},
{
"id": "1604.04382"
},
{
"id": "1609.04802"
},
{
"id": "1612.00005"
},
{
"id": "1606.03498"
},
{
"id": "1605.09782"
},
{
"id": "1605.05396"
},
{
"id": "1610.04490"
},
{
"id": "1606.00704"
},
{
"id": "1602.02644"
},
{
"id": "1611.02163"
},
{
"id": "1511.06434"
},
{
"id": "1512.09300"
}
] |
1612.02136 | 43 | In terms of sample quality and missing modes, we run the ofï¬cial code of VAEGAN 3 with their default setting. We train VAEGAN for 30 epochs 4 and our models for only 20 epochs. For fairness,
3https://github.com/andersbll/autoencoding_beyond_pixels 4Note that we also trained 20-epoch version of VAEGAN, however the samples seemed worse.
The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as presented above. In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGAN's: the second assumption mentioned above is not required in our approach. In our framework, the auto-encoder helps alter the generation manifold, leading to fewer distortions in the fine-grained details of our generated samples.
Figure 10: Samples generated by our models and VAEGAN. The third row shows samples generated by our self-trained VAEGAN model with default settings. The last row shows the generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.
In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.
Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.
σ     VAEGAN (100)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   9720           754             3644            74
4.0   5862           42              391             13
We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs poorly on our missing-modes metric is that its generated samples are of low quality, so the discriminator classifies them as "not on mode". Namely, the generated data is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.
To conduct a fairer evaluation between VAEGAN and our methods, we also perform a blind human evaluation. Again, we instructed five individuals to conduct this evaluation of sample variability. Without telling them which samples were generated by VAEGAN and which by our methods, four people agreed that our method wins in terms of sample diversity; one person considered the samples equally diverse.
In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically, as discussed above. Such differences empirically result in better sample quality and mode-preserving ability, which are our main contributions.
# TOWARDS THE LIMIT OF NETWORK QUANTIZATION
Network quantization is one of network compression techniques to reduce the re- dundancy of deep neural networks. It reduces the number of distinct network pa- rameter values by quantization in order to save the storage for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When opti- mal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information the- ory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloydâs algorithm. Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.
# INTRODUCTION | 1612.01543#1 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
1612.01543 | 2 | # INTRODUCTION
Deep neural networks have emerged as the state-of-the-art in the field of machine learning for image classification, object detection, speech recognition, natural language processing, and machine translation (LeCun et al., 2015). This substantial progress, however, comes with a high cost of computation and hardware resources, resulting from a large number of parameters. For example, Krizhevsky et al. (2012) came up with a deep convolutional neural network consisting of 61 million parameters and won the ImageNet competition in 2012. It was followed by deeper neural networks with even larger numbers of parameters, e.g., Simonyan & Zisserman (2014).
The large sizes of deep neural networks make it difficult to deploy them on resource-limited devices, e.g., mobile or portable devices, and network compression has therefore attracted great interest in recent years as a way to reduce the computational cost and memory requirements of deep neural networks. Our interest in this paper is mainly in curtailing the size of the storage (memory) for the network parameters (weights and biases). In particular, we focus on compressing the network size by reducing the number of distinct network parameter values via quantization. | 1612.01543#2 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 3 | Besides network quantization, network pruning has been studied for network compression to remove redundant parameters permanently from neural networks (Mozer & Smolensky, 1989; LeCun et al., 1989; Hassibi & Stork, 1993; Han et al., 2015b; Lebedev & Lempitsky, 2016; Wen et al., 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to find more efficient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Yang et al., 2015; Liu et al., 2015; Kim et al., 2015; Tai et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been examined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks consisting of binary or ternary parameters can be found in | 1612.01543#3 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 5 |
The most closely related work to our investigation can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for network quantization. This conventional approach, however, is proposed with little consideration for the impact of quantization errors on the neural network performance loss, and with no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the suboptimality of this conventional method and design new quantization schemes for neural networks. In particular, we formulate an optimization problem that minimizes the network performance loss due to quantization given a compression ratio constraint, and we find efficient quantization methods for neural networks.
The main contributions of the paper can be summarized as follows:
⢠It is derived that the performance loss due to quantization in neural networks can be quan- tiï¬ed approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss. | 1612.01543#5 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 6 | • It is identified that the optimization problem for network quantization under a compression ratio constraint can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efficient heuristic solutions for ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm.
⢠As an alternative of Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training. The advantage of using this alterna- tive is that it is computed while training and can be obtained at the end of training at no additional cost.
⢠It is shown how the proposed network quantization schemes can be applied for quantizing network parameters of all layers together at once, rather than layer-by-layer network quan- tization in Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, quantizing network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization. | 1612.01543#6 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 7 | The rest of the paper is organized as follows. In Section 2, we define the network quantization problem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experimental results and conclusions can be found in Section 5 and Section 6, respectively.
# 2 NETWORK QUANTIZATION
We consider a neural network that is already trained, pruned (if pruning is employed), and fine-tuned before quantization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on the quantization of the unpruned parameters. | 1612.01543#7 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 8 | The goal of network quantization is to quantize the (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., the cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode the quantized parameters into binary codewords, which are stored instead of the actual parameter values. Either fixed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end.
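For illustration only, the cluster-and-share idea described above can be sketched in a few lines of NumPy. The function names, the toy parameters, and the evenly spaced codebook below are ours and not part of the paper; the sketch only shows that decoding reduces to a table lookup.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): map each parameter to
# the nearest of k shared values and store only its index plus a small lookup
# table (the k representative values).
def quantize_with_codebook(params, codebook):
    # params: 1-D array of network parameters; codebook: 1-D array of k centers.
    return np.argmin(np.abs(params[:, None] - codebook[None, :]), axis=1)

def dequantize(indices, codebook):
    # Decoding is a table lookup: every parameter in cluster i shares codebook[i].
    return codebook[indices]

params = np.random.randn(1000).astype(np.float32)
codebook = np.linspace(params.min(), params.max(), 8)   # k = 8 shared values
indices = quantize_with_codebook(params, codebook)
recovered = dequantize(indices, codebook)
```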
2.1 COMPRESSION RATIO
Suppose that we have N parameters in total in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let $C_i$ be the set of network parameters in cluster i, and let $b_i$ be the number of bits of the codeword assigned to the network parameters in cluster i, for $1 \le i \le k$. For a lookup table to decode quantized
values from their binary encoded codewords, we store the k binary codewords ($b_i$ bits for $1 \le i \le k$) and the corresponding quantized values (b bits each). The compression ratio is then given by | 1612.01543#8 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 10 | Observe in (1) that the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the lengths of the binary codewords assigned to them, in particular when a variable-length code is used for encoding the quantized values. For fixed-length codes, however, all codewords are of the same length, i.e., $b_i = \lceil \log_2 k \rceil$ for all $1 \le i \le k$, and thus the compression ratio reduces to a function of the number of clusters k only, assuming that N and b are given.
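As a worked example of the fixed-length special case just described (an illustration with made-up numbers, not a result from the paper), the compression ratio can be computed directly from N, b, and k:

```python
import math

# Fixed-length coding: every codeword has ceil(log2 k) bits, so the compression
# ratio is a function of k only, given N parameters of b bits each. The lookup
# table stores k codewords plus k quantized values of b bits each.
def fixed_length_compression_ratio(N, b, k):
    code_bits = math.ceil(math.log2(k))
    compressed_bits = N * code_bits + k * code_bits + k * b
    return (N * b) / compressed_bits

print(fixed_length_compression_ratio(N=1_000_000, b=32, k=8))  # ~10.7, i.e., roughly 32/3
```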
2.2 K-MEANS CLUSTERING
Provided network parameters $\{w_i\}_{i=1}^{N}$ to quantize, k-means clustering partitions them into k disjoint sets (clusters), denoted by $C_1, C_2, \dots, C_k$, while minimizing the mean square quantization error (MSQE) as follows:
$\operatorname*{argmin}_{C_1,C_2,\dots,C_k} \sum_{i=1}^{k} \sum_{w \in C_i} |w - c_i|^2$, where $c_i = \frac{1}{|C_i|} \sum_{w \in C_i} w$. (2) | 1612.01543#10 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 11 |
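For reference in the discussion below, a minimal Lloyd-style implementation of the scalar k-means objective in (2) might look as follows; this is an illustrative sketch with our own toy data, not the authors' code.

```python
import numpy as np

# Lloyd's algorithm for the scalar k-means objective in (2): alternate between
# assigning each parameter to its nearest center and recomputing each center as
# the (unweighted) mean of its cluster.
def kmeans_1d(w, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    assign = np.zeros(w.size, dtype=int)
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = w[assign == j]
            if members.size > 0:
                centers[j] = members.mean()
    return centers, assign

w = np.random.randn(10_000)
centers, assign = kmeans_1d(w, k=16)
msqe = np.mean((w - centers[assign]) ** 2)   # the MSQE that (2) minimizes
```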
We observe two issues with employing k-means clustering for network quantization.
⢠First, although k-means clustering minimizes the MSQE, it does not imply that k-means clustering minimizes the performance loss due to quantization as well in neural networks. K-means clustering treats quantization errors from all network parameters with equal im- portance. However, quantization errors from some network parameters may degrade the performance more signiï¬cantly that the others. Thus, for minimizing the loss due to quan- tization in neural networks, one needs to take this dissimilarity into account. | 1612.01543#11 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 12 | • Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is, however, suboptimal when variable-length coding follows, since the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the codeword lengths assigned to them, which are determined by the binary coding scheme employed after clustering. Therefore, for the optimization of network quantization given a compression ratio constraint, one needs to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the specific binary coding scheme employed after clustering.
# 3 HESSIAN-WEIGHTED NETWORK QUANTIZATION
In this section, we analyze the impact of quantization errors on the neural network loss function and derive that the Hessian-weighted distortion measure is a relevant objective function for network quantization in order to minimize the quantization loss locally. Moreover, from this analysis, we propose Hessian-weighted k-means clustering for network quantization to minimize the performance loss due to quantization in neural networks.
3.1 NETWORK MODEL | 1612.01543#12 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 13 | 3.1 NETWORK MODEL
We consider a general non-linear neural network that yields output $y = f(x; w)$ from input $x$, where $w = [w_1 \cdots w_N]^T$ is the vector consisting of all trainable network parameters and $N$ is the total number of trainable parameters in the network. A loss function $\mathrm{loss}(y, \hat{y})$ is defined as the objective function that we aim to minimize on average, where $\hat{y} = \hat{y}(x)$ is the expected (ground-truth) output for input $x$. Cross entropy and mean square error are typical examples of a loss function. Given a training data set $\mathcal{X}_{\mathrm{train}}$, we optimize the network parameters by solving the following problem, e.g., approximately by using a stochastic gradient descent (SGD) method with mini-batches:
$\hat{w} = \operatorname*{argmin}_{w} L(\mathcal{X}_{\mathrm{train}}; w)$, where $L(\mathcal{X}; w) = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \mathrm{loss}(f(x; w), \hat{y}(x))$. | 1612.01543#13 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 15 | the square matrix $H(w)$ consisting of the second-order partial derivatives is called the Hessian matrix, or Hessian. Assume that the loss function has reached one of its local minima, at $w = \hat{w}$, after training. At a local minimum, the gradients are all zero, i.e., we have $g(\hat{w}) = 0$, and thus the first term on the right-hand side of (3) can be neglected at $w = \hat{w}$. The third term on the right-hand side of (3) is also ignored under the assumption that the average loss function is approximately quadratic at the local minimum $w = \hat{w}$. Finally, for simplicity, we approximate the Hessian matrix as a diagonal matrix by setting its off-diagonal terms to zero. Then, it follows from (3) that
$\delta L(\mathcal{X}; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) |\delta \hat{w}_i|^2$, (4) | 1612.01543#15 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 16 | where $h_{ii}(\hat{w})$ is the second-order partial derivative of the average loss function with respect to $w_i$ evaluated at $w = \hat{w}$, which is the $i$-th diagonal element of the Hessian matrix $H(\hat{w})$.
Now, we connect (4) with the problem of network quantization by treating $\delta \hat{w}_i$ as the quantization error of network parameter $w_i$ at its local optimum $w_i = \hat{w}_i$, i.e.,
$\delta \hat{w}_i = \bar{w}_i - \hat{w}_i$, (5)
where $\bar{w}_i$ is the quantized value of $\hat{w}_i$. Finally, combining (4) and (5), we derive that the local impact of quantization on the average loss function at $w = \hat{w}$ can be quantified approximately as follows:
$\delta L(\mathcal{X}; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) |\hat{w}_i - \bar{w}_i|^2$. (6) | 1612.01543#16 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 17 |
At a local minimum, the diagonal elements of the Hessian, i.e., the $h_{ii}(\hat{w})$'s, are all non-negative, and thus the summation in (6) is always additive, implying that the average loss function either increases or stays the same. Therefore, the performance degradation due to quantization of a neural network can be measured approximately by the Hessian-weighted distortion shown in (6). Further discussion on the Hessian-weighted distortion measure can be found in Appendix A.1.
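For concreteness, the Hessian-weighted distortion of (6) is straightforward to evaluate once diagonal Hessian estimates are available; the sketch below uses random stand-ins for $h_{ii}(\hat{w})$ and a crude rounding quantizer purely for illustration.

```python
import numpy as np

# Local loss increase predicted by (6) for a given quantization, using
# (approximate) non-negative diagonal Hessian entries h_ii.
def hessian_weighted_distortion(w_hat, w_bar, h_diag):
    # w_hat: trained parameters, w_bar: their quantized values,
    # h_diag: diagonal Hessian estimates; all 1-D arrays of equal length.
    return 0.5 * np.sum(h_diag * (w_hat - w_bar) ** 2)

w_hat = np.random.randn(1000)
h_diag = np.abs(np.random.randn(1000))   # stand-in for h_ii(w_hat) >= 0
w_bar = np.round(w_hat, 1)               # a crude quantizer, for illustration only
print(hessian_weighted_distortion(w_hat, w_bar, h_diag))
```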
# 3.3 HESSIAN-WEIGHTED K-MEANS CLUSTERING
For notational simplicity, we use $w_i \equiv \hat{w}_i$ and $h_{ii} \equiv h_{ii}(\hat{w})$ from now on. The optimal clustering that minimizes the Hessian-weighted distortion measure is given by
$\operatorname*{argmin}_{C_1,C_2,\dots,C_k} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2$, where $c_j = \frac{\sum_{w_i \in C_j} h_{ii} w_i}{\sum_{w_i \in C_j} h_{ii}}$. (7) | 1612.01543#17 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 18 | # wiâCj X
We call this Hessian-weighted k-means clustering. Observe in (7) that, when defining the distortion measure for clustering, we give a larger penalty to a network parameter whose second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact of its quantization error on the loss function is expected to be larger.
Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when fixed-length binary coding follows, where the compression ratio depends solely on the number of clusters, as shown in Section 2.1. Similar to conventional k-means clustering, solving this optimization is not easy, but Lloyd's algorithm is still applicable as an efficient heuristic solution to this problem if Hessian-weighted means are used as the cluster centers instead of unweighted means.
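A minimal sketch of such a Lloyd-style iteration with Hessian-weighted cluster centers, following (7), is given below; the toy parameters and the stand-in Hessian values are ours. Note that for a single parameter the positive weight $h_{ii}$ does not change which center is nearest; the weighting matters in the update step, where each center becomes a Hessian-weighted mean.

```python
import numpy as np

# Illustrative sketch of Hessian-weighted k-means (not the authors' code):
# assignment minimizes h_ii * |w_i - c_j|^2 and the update uses the
# Hessian-weighted mean of each cluster, as in (7).
def hessian_weighted_kmeans(w, h, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    assign = np.zeros(w.size, dtype=int)
    for _ in range(iters):
        assign = np.argmin(h[:, None] * (w[:, None] - centers[None, :]) ** 2, axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
    return centers, assign

w = np.random.randn(10_000)
h = np.abs(np.random.randn(10_000)) + 1e-8   # stand-in diagonal Hessian, h_ii > 0
centers, assign = hessian_weighted_kmeans(w, h, k=16)
```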
3.4 HESSIAN COMPUTATION
For obtaining the Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of the network parameters, i.e., we need to calculate | 1612.01543#18 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 20 | # xâX X
Recall that we are interested only in the diagonal elements of the Hessian. An efficient way of computing the diagonal of the Hessian is presented in Le Cun (1987); Becker & Le Cun (1988); it is based on a back propagation method similar to the back propagation algorithm used for computing the first-order partial derivatives (gradients). That is, computing the diagonal of the Hessian is of the same order of complexity as computing the gradients.
Hessian computation and our network quantization are performed after completing network training. For the data set $\mathcal{X}$ used to compute the Hessian in (8), we can either reuse a training data set or use some other data set, e.g., a validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufficient to yield a good approximation of the Hessian for network quantization.
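The second-order back propagation of Becker & Le Cun (1988) is not reproduced here; purely as an illustration of what a diagonal Hessian estimate is, the sketch below uses central finite differences of a gradient oracle on a toy quadratic loss. This costs two gradient evaluations per parameter and is therefore impractical for real networks, unlike the method referenced above.

```python
import numpy as np

# Toy loss 0.5 * w^T A w, whose gradient is A w and whose diagonal Hessian is diag(A).
A = np.diag(np.linspace(0.1, 2.0, 5))
def grad_fn(w):
    return A @ w

def diag_hessian_fd(grad_fn, w, eps=1e-4):
    # Central finite differences of the gradient give the diagonal Hessian entries.
    h = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        h[i] = (grad_fn(w + e)[i] - grad_fn(w - e)[i]) / (2.0 * eps)
    return h

w = np.random.randn(5)
print(diag_hessian_fd(grad_fn, w))   # approximately [0.1, ..., 2.0] = diag(A)
```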
3.5 ALTERNATIVE OF HESSIAN | 1612.01543#20 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 21 | 3.5 ALTERNATIVE OF HESSIAN
Although there is an efficient way to obtain the diagonal of the Hessian as discussed in the previous subsection, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of the Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., the square root) of the second moment estimates of gradients as an alternative to the Hessian.
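A minimal sketch of this alternative is shown below: a running Adam-style second moment estimate of the gradients is maintained and its square root is used as the weight for quantization. The per-step gradients here are synthetic; in practice the same running average is typically already kept in the optimizer state, so no extra pass over the data is needed (an assumption about the training setup, not a claim about any particular library).

```python
import numpy as np

# Track the Adam-style second moment estimate of the gradients and use its
# square root as a stand-in for the diagonal Hessian when weighting
# quantization errors.
def second_moment_estimate(grads, beta2=0.999):
    v = np.zeros_like(grads[0])
    for g in grads:
        v = beta2 * v + (1.0 - beta2) * g * g
    return v / (1.0 - beta2 ** len(grads))   # bias correction, as in Adam

grads = [np.random.randn(1000) for _ in range(200)]      # synthetic per-step gradients
hessian_alternative = np.sqrt(second_moment_estimate(grads))
```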
The Adam algorithm computes adaptive learning rates for individual network parameters from the first and second moment estimates of gradients. We compare the Adam method to Newton's optimization method using the Hessian and notice that the second moment estimates of gradients in the Adam method act like the Hessian in Newton's method. This observation leads us to use some function (e.g., the square root) of the second moment estimates of gradients as an alternative to the Hessian. | 1612.01543#20 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 21 | The advantage of using the second moment estimates from the Adam method is that they are computed while training and we can obtain them at the end of training at no additional cost. It makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012).
3.6 QUANTIZATION OF ALL LAYERS
We propose quantizing the network parameters of all layers in a neural network together at once by taking the Hessian-weights into account. Layer-by-layer quantization was examined in the previous work (Gong et al., 2014; Han et al., 2015a). However, e.g., in Han et al. (2015a), a larger number of bits (a larger number of clusters) is assigned to convolutional layers than to fully-connected layers, which implies that convolutional layers are heuristically treated as more important. This follows from the fact that the impact of quantization errors on the performance varies significantly across layers; some layers, e.g., convolutional layers, may be more important than the others. This concern is exactly what we can address by Hessian-weighting. | 1612.01543#21 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 22 | Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers, and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers. Thus, Hessian-weighting may show more benefit in deeper neural networks. We note that Hessian-weighting can still provide gain even for layer-by-layer quantization since it can address the different impact of the quantization errors of network parameters within each layer as well.
Recent neural networks are getting deeper, e.g., see Szegedy et al. (2015a;b); He et al. (2015). For such deep neural networks, quantizing the network parameters of all layers together is even more advantageous since we can avoid layer-by-layer compression rate optimization. Optimizing compression
ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers. This is because the total number of possible combinations of compression ratios for individual layers increases exponentially as the number of layers increases.
# 4 ENTROPY-CONSTRAINED NETWORK QUANTIZATION | 1612.01543#23 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 24 | # 4 ENTROPY-CONSTRAINED NETWORK QUANTIZATION
In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization.
4.1 ENTROPY CODING
After quantizing network parameters by clustering, lossless data compression by variable-length binary coding can follow for compressing the quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of the average codeword length per symbol that we can achieve by lossless data compression, proved by Shannon (see, e.g., Cover & Thomas (2012, Section 5.3)). It is known that optimal codes achieve this limit with an overhead of less than 1 bit when only integer-length codewords are allowed. So optimal coding is also called entropy coding. Huffman coding is one of the entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012, Section 5.6)), or can be estimated. | 1612.01543#23 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
redundancy of deep neural networks. It reduces the number of distinct network
parameter values by quantization in order to save the storage for them. In this
paper, we design network quantization schemes that minimize the performance
loss due to quantization given a compression ratio constraint. We analyze the
quantitative relation of quantization errors to the neural network loss
function and identify that the Hessian-weighted distortion measure is locally
the right objective function for the optimization of network quantization. As a
result, Hessian-weighted k-means clustering is proposed for clustering network
parameters to quantize. When optimal variable-length binary codes, e.g.,
Huffman codes, are employed for further compression, we derive that the network
quantization problem can be related to the entropy-constrained scalar
quantization (ECSQ) problem in information theory and consequently propose two
solutions of ECSQ for network quantization, i.e., uniform quantization and an
iterative solution similar to Lloyd's algorithm. Finally, using the simple
uniform quantization followed by Huffman coding, we show from our experiments
that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet,
32-layer ResNet and AlexNet, respectively. | http://arxiv.org/pdf/1612.01543 | Yoojin Choi, Mostafa El-Khamy, Jungwon Lee | cs.CV, cs.LG, cs.NE | Published as a conference paper at ICLR 2017 | null | cs.CV | 20161205 | 20171113 | [
{
"id": "1510.03009"
},
{
"id": "1511.06067"
},
{
"id": "1510.00149"
},
{
"id": "1511.06393"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1511.06530"
}
] |
1612.01543 | 26 | # i=1 X
which follows from (1). This optimization problem is too complex to solve for any arbitrary variable-length binary code since the average codeword length $\bar{b}$ can be arbitrary. However, we identify that it can be simplified if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., the entropy, and then we approximately have
$\bar{b} \approx H = -\sum_{i=1}^{k} p_i \log_2 p_i$, (10)
where $H$ is the entropy of the quantized network parameters after clustering (i.e., the source), given that $p_i = |C_i|/N$ is the ratio of the number of network parameters in cluster $C_i$ to the number of all network parameters (i.e., the source distribution). Moreover, assuming that $N \gg k$, we have
1 N bi + kb â 0, ! (11) | 1612.01543#26 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
in (9). From (10) and (11), the constraint in (9) can be altered to an entropy constraint given by
H = -\sum_{i=1}^{k} p_i \log_2 p_i < R,

where R ≈ b/C. In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint on the clustering output. The network quantization problem is then translated into a quantization problem with an entropy constraint, which is called entropy-constrained scalar quantization (ECSQ) in information theory. Two efficient heuristic solutions for ECSQ are proposed for network quantization in the following subsections, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm for k-means clustering.
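To make the reformulation concrete, the following minimal NumPy sketch (our own illustration rather than code from the paper; the helper name and the default target ratio are assumptions) computes the cluster distribution p_i, the entropy H, and checks the entropy constraint H < R = b/C for a given clustering of the network parameters.

```python
import numpy as np

def entropy_constraint_satisfied(assignments, k, b=32.0, target_ratio=20.0):
    """Check the approximate entropy constraint H < R = b / C for a clustering.

    assignments  : 1-D integer array with the cluster index (0..k-1) of each of
                   the N network parameters.
    b            : bits per original (uncompressed) parameter, e.g. 32.
    target_ratio : desired compression ratio C.
    """
    N = assignments.size
    counts = np.bincount(assignments, minlength=k)
    p = counts[counts > 0] / N            # source distribution p_i = |C_i| / N
    H = -np.sum(p * np.log2(p))           # entropy of the clustering output
    R = b / target_ratio                  # allowed average codeword length
    return H, R, bool(H < R)
```

With optimal (e.g., Huffman) coding, H closely approximates the achievable average codeword length ¯b, so a clustering passing this check approximately meets the compression ratio constraint.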
# 4.3 UNIFORM QUANTIZATION
It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in minimizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes infinite, i.e., as the number of clusters k → ∞. This asymptotic result leads us to a very simple but efficient network quantization scheme as follows:
1. We first set uniformly spaced thresholds and divide network parameters into clusters.
2. After determining clusters, their quantized values (cluster centers) are obtained by taking the mean of network parameters in each cluster.
Note that one can use the Hessian-weighted mean instead of the non-weighted mean in computing cluster centers in the second step above in order to take the benefit of Hessian-weighting. A performance comparison of uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean can be found in Appendix A.2.
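A minimal NumPy sketch of this two-step scheme follows; the equal-width bin placement over the parameter range, the handling of empty clusters, and the function name are our assumptions, since the paper does not prescribe these details.

```python
import numpy as np

def uniform_quantize(w, k, h=None):
    """Step 1: uniformly spaced thresholds; step 2: (Hessian-weighted) cluster means.

    w : 1-D array of network parameters.
    k : number of clusters.
    h : optional positive weights (e.g. a Hessian-diagonal estimate); if given,
        cluster centers are Hessian-weighted means instead of plain means.
    """
    edges = np.linspace(w.min(), w.max(), k + 1)           # uniformly spaced thresholds
    idx = np.clip(np.digitize(w, edges[1:-1]), 0, k - 1)   # cluster index per parameter
    weights = np.ones_like(w) if h is None else h
    centers = np.zeros(k)
    for j in range(k):
        mask = idx == j
        if mask.any():                                     # leave empty clusters at zero
            centers[j] = np.average(w[mask], weights=weights[mask])
    return centers[idx], idx, centers   # quantized parameters, assignments, codebook
```

The returned assignments can then be entropy-coded (e.g., with Huffman coding) as described above.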
Although uniform quantization is a straightforward method, it has never been shown before in the literature that it is actually one of the most efficient quantization schemes for neural networks when optimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization is not always good; it is inefficient for fixed-length coding, which is also first shown in this paper.
# 4.4 ITERATIVE ALGORITHM TO SOLVE ECSQ
Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algorithm, which is similar to Lloyd's algorithm for k-means clustering. Although this iterative solution is more complicated than the uniform quantization in Section 4.3, it finds a local optimum for a given discrete source. An iterative algorithm to solve the general ECSQ problem is provided in Chou et al. (1989). We derive a similar iterative algorithm to solve the ECSQ problem for network quantization. The main difference from the method in Chou et al. (1989) is that we minimize the Hessian-weighted distortion measure instead of the non-weighted regular distortion measure for optimal quantization. The detailed algorithm and further discussion can be found in Appendix A.3.
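Since the exact procedure is deferred to Appendix A.3, the sketch below is only an illustration of a Chou et al. (1989)-style iteration adapted to the Hessian-weighted distortion: each parameter is assigned to the cluster minimizing a Lagrangian cost (Hessian-weighted squared error plus λ times the estimated code length −log2 p_j), after which cluster centers are updated as Hessian-weighted means and the cluster probabilities are re-estimated. The initialization, the fixed λ, and the iteration count are assumptions, not choices made in the paper.

```python
import numpy as np

def iterative_ecsq(w, h, k, lam=0.1, iters=30):
    """Lloyd-like iteration for entropy-constrained quantization (illustrative sketch).

    w   : 1-D array of network parameters.
    h   : positive Hessian (or surrogate) weights, same shape as w.
    lam : Lagrange multiplier trading distortion against entropy.
    """
    centers = np.linspace(w.min(), w.max(), k)      # simple initialization (assumption)
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        # Assignment: Hessian-weighted distortion plus lambda * codeword length.
        cost = h[:, None] * (w[:, None] - centers[None, :]) ** 2 \
               - lam * np.log2(np.maximum(p, 1e-12))[None, :]
        idx = np.argmin(cost, axis=1)
        # Update: Hessian-weighted cluster centers and cluster probabilities.
        for j in range(k):
            mask = idx == j
            if mask.any():
                centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
        p = np.bincount(idx, minlength=k) / w.size
    return centers[idx], idx, centers
```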
# 5 EXPERIMENTS
This section presents our experiment results for the proposed network quantization schemes in three exemplary convolutional neural networks: (a) LeNet (LeCun et al., 1998) for the MNIST data set, (b) ResNet (He et al., 2015) for the CIFAR-10 data set, and (c) AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set. Our experiments can be summarized as follows:
⢠We employ the proposed network quantization methods to quantize all of network param- eters in a network together at once, as discussed in Section 3.6.
We evaluate the performance of the proposed network quantization methods with and with- out network pruning. For a pruned model, we need to store not only the values of unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute index differences between unpruned network parameters in the original model and further compress them by Huffman coding as in Han et al. (2015a). ⢠For Hessian computation, 50,000 samples of the training set are reused. We also evaluate
the performance when Hessian is computed with 1,000 samples only. | 1612.01543#30 | Towards the Limit of Network Quantization | Network quantization is one of network compression techniques to reduce the
⢠Finally, we evaluate the performance of our network quantization schemes using Hessian when its alternative is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of gradients at the end of training. Then, we use the square roots of the second moment estimates instead of Hessian and evaluate the performance.
# 5.1 EXPERIMENT MODELS
First, we evaluate our network quantization schemes for the MNIST data set with a simplified version of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected
layers followed by a soft-max layer. It has 431,080 parameters in total and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and fine-tune the rest.

Figure 1: Accuracy versus average codeword length per network parameter after network quantization for 32-layer ResNet. (Panels: (a) fixed-length coding, (b) fixed-length coding + fine-tuning, (c) Huffman coding, (d) Huffman coding + fine-tuning; each panel plots accuracy (%) against the codeword length in bits for k-means, Hessian-weighted k-means, uniform quantization, and iterative ECSQ.)
Second, we evaluate our network quantization schemes on the CIFAR-10 data set (Krizhevsky, 2009) with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and fine-tune the rest.
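The pruning step itself is not detailed in this excerpt; a minimal sketch assuming magnitude-based pruning in the spirit of Han et al. (2015b), with the thresholding rule being our assumption, is:

```python
import numpy as np

def magnitude_prune(w, prune_fraction):
    """Zero out the smallest-magnitude fraction of parameters and return a mask.

    w              : 1-D array of network parameters (e.g. all layers flattened).
    prune_fraction : fraction of parameters to remove, e.g. 0.80 or 0.91.
    """
    threshold = np.quantile(np.abs(w), prune_fraction)   # magnitude cutoff (assumption)
    mask = np.abs(w) >= threshold                        # surviving parameters
    # Only the surviving values are stored, together with their positions
    # (e.g. as successive index differences that are later Huffman-coded).
    return w * mask, mask
```

The surviving parameters are then fine-tuned before quantization.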
Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% of the parameters and fine-tune the rest. In fine-tuning, the Adam SGD optimizer is used in order to avoid the computation of Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after fine-tuning with the Adam method; the top-1 accuracy recovered after pruning and fine-tuning is 56.00%. We are able to find a better pruned model that achieves the original accuracy by pruning and retraining iteratively (Han et al., 2015b), which is, however, not used here.
# 5.2 EXPERIMENT RESULTS
We first present the quantization results without pruning for 32-layer ResNet in Figure 1, where the accuracy of 32-layer ResNet is plotted against the average codeword length per network parameter after quantization. When fixed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than the others even after fine-tuning. On the other hand, when Huffman coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering. However, these two ECSQ solutions underperform Hessian-weighted k-means clustering and even k-means clustering when fixed-length coding is employed, since they are optimized for optimal variable-length coding.
[Figure: accuracy (%) versus average codeword length per network parameter for (a) LeNet and (b) 32-layer ResNet, comparing k-means, Hessian-weighted k-means with Hessian computed from 50,000 samples, Hessian-weighted k-means with Hessian computed from 1,000 samples, and Alt-Hessian-weighted k-means.]