doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1708.03888 | 7 | The main obstacle to scaling up the batch size is the instability of training with a high LR. Hoffer et al. (2017) tried to use a less aggressive "square root scaling" of the LR with a special form of Batch Normalization ("Ghost Batch Normalization") to train Alexnet with B=8K, but the accuracy (53.93%) was still much worse than the 58% baseline. To overcome the instability during the initial phase, Goyal et al. (2017) proposed to use LR warm-up: training starts with a small LR, which is then gradually increased to the target value. After the warm-up period (usually a few epochs), training switches to the regular LR policy ("multi-steps", polynomial decay, etc.). Using LR warm-up and linear scaling, Goyal et al. (2017) trained Resnet-50 with batch B=8K without loss in accuracy. These recipes constitute the current state of the art for large batch training, and we used them as the starting point of our experiments. | 1708.03888#7 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
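The warm-up plus linear-scaling recipe described in the chunk above can be written as a small LR schedule. The Python sketch below is illustrative only: the base LR, warm-up length, and the polynomial decay applied afterwards are assumed values, not ones taken from Goyal et al. (2017) or from this paper.

```python
def scaled_lr(base_lr, base_batch, batch):
    """Linear scaling rule: the LR grows proportionally to the batch size."""
    return base_lr * batch / base_batch


def lr_at_epoch(epoch, target_lr, warmup_epochs=5, total_epochs=100, start_lr=0.001):
    """LR warm-up followed by a polynomial (power=2) decay policy."""
    if epoch < warmup_epochs:
        # Linearly increase the LR from start_lr to target_lr during warm-up.
        return start_lr + (target_lr - start_lr) * epoch / warmup_epochs
    # After warm-up, decay the LR polynomially towards zero.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return target_lr * (1.0 - progress) ** 2


# Example: scale a base LR of 0.1 (tuned for batch 256) up to batch 8192.
target = scaled_lr(base_lr=0.1, base_batch=256, batch=8192)
schedule = [lr_at_epoch(e, target) for e in range(100)]
```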
1708.03888 | 8 | Another problem related to large batch training is the so-called "generalization gap" observed by Keskar et al. (2016). They came to the conclusion that "the lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function." They tried a few methods to improve generalization, such as data augmentation and warm-starting with a small batch, but they did not find a working solution.
# 3 ANALYSIS OF ALEXNET TRAINING WITH LARGE BATCH
We used BVLC1 Alexnet with batch B=512 as the baseline. The model was trained using SGD with momentum 0.9, an initial LR=0.01, and the polynomial (power=2) decay LR policy for 100 epochs. The baseline accuracy is 58% (averaged over the last 5 epochs). Next we tried to train Alexnet with B=4K by using a larger LR. In our experiments we changed the base LR from 0.01 to 0.08, but training diverged with LR > 0.06 even with warm-up 2. The best accuracy for B=4K is 53.1%, achieved for LR=0.05. For B=8K we couldn't scale up the LR either, and the best accuracy is 44.8%, achieved for LR=0.03 (see Table 1(a)). | 1708.03888#8 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 9 | To stabilize the initial training phase we replaced the Local Response Normalization layers with Batch Normalization (BN). We will refer to this model as Alexnet-BN 3. The baseline accuracy for Alexnet-BN with B=512 is 60.2%. 4 With BN we could use large LRs even without warm-up. For B=4K the best accuracy of 58.9% was achieved for LR=0.18, and for B=8K the best accuracy of 58% was achieved for LR=0.3. We also observed that BN significantly widens the range of LRs that give good accuracy.
Table 1: Alexnet and Alexnet-BN: B=4K and 8K. BN makes it possible to use larger learning rates.
0.02 0.04 0.05 0.06 0.07 0.02 0.03 0.04 0.05 0.02 0.16 0.18 0.21 0.30 0.23 0.30 0.32 0.41 60.2 58.1 58.9 58.5 57.1 57.6 58.0 57.7 56.5 | 1708.03888#9 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
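The Alexnet-BN model in the chunk above is obtained by swapping Local Response Normalization for Batch Normalization after the convolutions. A hedged PyTorch-style sketch of one such block is shown below; the authors used Caffe, and the channel counts and kernel size here are placeholders rather than the actual Alexnet-BN definition.

```python
import torch.nn as nn


def conv_block(in_ch, out_ch, use_bn=True, **conv_kwargs):
    """One convolutional block: Conv -> (BatchNorm2d or LRN) -> ReLU."""
    norm = nn.BatchNorm2d(out_ch) if use_bn else nn.LocalResponseNorm(size=5)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, **conv_kwargs),
        norm,
        nn.ReLU(inplace=True),
    )


# Alexnet-style first layer with BN instead of LRN (placeholder hyper-parameters).
block1 = conv_block(3, 96, use_bn=True, kernel_size=11, stride=4)
```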
1708.03888 | 10 | Still, there is a 2.2% accuracy loss for B=8K. To check if it is related to the "generalization gap" (Keskar et al. (2016)), we looked at the loss gap between training and testing (see Fig. 1). We did not find a significant difference in the loss gap between B=256 and B=8K. We conclude that in this case the accuracy loss is not related to a generalization gap, and it is caused by slow training.
AlexNet with Batch Normalization and poly LR (power=2): |Test Loss - Train Loss| vs. epochs for Batch=512 (Base LR=0.02) and Batch=8192 (Base LR=0.32)
Figure 1: Alexnet-BN: Gap between training and testing loss | 1708.03888#10 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 11 | Figure 1: Alexnet-BN: Gap between training and testing loss
1 https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet
2 LR starts from 0.001 and is linearly increased to the target LR during 2.5 epochs.
3 https://github.com/borisgin/nvcaffe-0.16/tree/caffe-0.16/models/alexnet_bn
4 The Alexnet-BN baseline was trained using SGD with momentum=0.9 and weight decay=0.0005 for 128 epochs. We used a polynomial (power 2) decay LR policy with base LR=0.02.
# 4 LAYER-WISE ADAPTIVE RATE SCALING (LARS) | 1708.03888#11 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 12 | # 4 LAYER-WISE ADAPTIVE RATE SCALING (LARS)
The standard SGD uses the same LR λ for all layers: w_{t+1} = w_t − λ∇L(w_t). When λ is large, the update ||λ · ∇L(w_t)|| can become larger than ||w||, and this can cause divergence. This makes the initial phase of training highly sensitive to the weight initialization and to the initial LR. We found that the ratio of the L2-norm of weights to that of gradients, ||w||/||∇L(w_t)||, varies significantly between weights and biases, and between different layers. For example, let's take AlexNet-BN after one iteration (Table 2, "*.w" means layer weights, and "*.b" means biases). The ratio ||w||/||∇L(w)|| for the 1st convolutional layer ("conv1.w") is 5.76, and for the last fully connected layer ("fc6.w") it is 1345. The ratio is high during the initial phase, and it rapidly decreases after a few epochs (see Figure 2).
Table 2: AlexNet-BN: The norm of weights and gradients at 1st iteration. | 1708.03888#12 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
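The per-layer ratios ||w||/||∇L(w)|| reported in Table 2 can be measured directly after a backward pass. Below is a hedged PyTorch-style sketch (the paper's experiments used Caffe); `model`, `criterion`, and the input batch are assumed to exist and are not defined by the paper.

```python
def weight_to_grad_ratios(model):
    """Return ||w|| / ||grad L(w)|| for every parameter after loss.backward()."""
    ratios = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        w_norm = p.detach().norm()       # L2 norm of the weights
        g_norm = p.grad.detach().norm()  # L2 norm of the gradient
        ratios[name] = (w_norm / (g_norm + 1e-12)).item()
    return ratios


# Usage (assuming `model`, `criterion`, `images`, and `labels` are defined):
#   loss = criterion(model(images), labels)
#   loss.backward()
#   print(weight_to_grad_ratios(model))
```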
1708.03888 | 13 | Table 2: AlexNet-BN: The norm of weights and gradients at 1st iteration.
Layer: ||w||, ||∇L(w)||, ||w||/||∇L(w)||
conv1.b: 1.86, 0.22, 8.48; conv1.w: 0.098, 0.017, 5.76
conv2.b: 5.546, 0.165, 33.6; conv2.w: 0.16, 0.002, 83.5
conv3.b: 9.40, 0.135, 69.9; conv3.w: 0.196, 0.0015, 127
conv4.b: 8.15, 0.109, 74.6
conv5.b: 6.65, 0.09, 73.6; conv5.w: 0.16, 0.0002, 69
fc6.b: 30.7, 0.26, 117; fc6.w: 6.4, 0.005, 1345
fc7.b: 20.5, 0.30, 68; fc7.w: 6.4, 0.013, 489
fc8.b: 20.2, 0.22, 93; fc8.w: 0.316, 0.016, 19 | 1708.03888#13 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 14 | If the LR is large compared to this ratio for some layer, then training may become unstable. The LR "warm-up" attempts to overcome this difficulty by starting from a small LR, which can be safely used for all layers, and then slowly increasing it until the weights grow enough to use larger LRs. We would like to use a different approach. We use a local LR λ^l for each layer l:
∆w_t^l = γ · λ^l · ∇L(w_t^l)   (4)
where γ is the global LR. The local LR λ^l is defined for each layer l through a "trust" coefficient η < 1:
λ^l = η · ||w^l|| / ||∇L(w^l)||   (5)
The coefficient η defines how much we trust the layer to change its weights during one update 5. Note that now the magnitude of the update for each layer does not depend on the magnitude of the gradient anymore, so it helps to partially eliminate the vanishing and exploding gradient problems. This definition can be easily extended for SGD to balance the local learning rate and the weight decay term β:
λ^l = η · ||w^l|| / (||∇L(w^l)|| + β · ||w^l||)   (6) | 1708.03888#14 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
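Equations (5) and (6) above define the layer-wise "trust"-scaled learning rate. The small Python sketch below is a direct transcription of those formulas; the trust coefficient value is only an example, and the sample norms are the conv1.w entries from Table 2.

```python
def local_lr(w_norm, g_norm, eta=0.001, weight_decay=0.0):
    """Local LR of eq. (6): eta * ||w|| / (||grad|| + beta * ||w||).

    With weight_decay=0 this reduces to eq. (5).
    """
    return eta * w_norm / (g_norm + weight_decay * w_norm)


# Example with the conv1.w norms from Table 2 (||w||=0.098, ||grad||=0.017).
print(local_lr(0.098, 0.017, eta=0.001))
```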
1708.03888 | 15 | λ^l = η · ||w^l|| / (||∇L(w^l)|| + β · ||w^l||)   (6)
Algorithm 1 SGD with LARS. Example with weight decay, momentum and polynomial LR decay.
Parameters: base LR γ0, momentum m, weight decay β, LARS coefficient η, number of steps T
Init: t = 0, v = 0. Init weight w_0^l for each layer l
while t < T for each layer l do
   g_t^l ← ∇L(w_t^l) (obtain a stochastic gradient for the current mini-batch)
   γ_{t+1} ← γ0 · (1 − t/T)^2 (compute the global learning rate)
   λ^l ← η · ||w_t^l|| / (||g_t^l|| + β · ||w_t^l||) (compute the local LR λ^l)
   v_{t+1}^l ← m · v_t^l + γ_{t+1} · λ^l · (g_t^l + β · w_t^l) (update the momentum)
   w_{t+1}^l ← w_t^l − v_{t+1}^l (update the weights)
end while
The network training with SGD and LARS is summarized in Algorithm 1. One can find more implementation details at https://github.com/borisgin/nvcaffe-0.16 | 1708.03888#15 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
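Algorithm 1 above combines the local LR with momentum, weight decay, and polynomial decay of the global LR. The sketch below is a hedged PyTorch-style reimplementation of that update loop, not the authors' nvcaffe code; the hyper-parameter defaults are illustrative.

```python
import torch


@torch.no_grad()
def lars_step(params, momentum_buffers, t, T, base_lr=2.0,
              momentum=0.9, weight_decay=0.0005, eta=0.001):
    """One SGD-with-LARS update following Algorithm 1 (gradients already computed)."""
    global_lr = base_lr * (1.0 - t / T) ** 2  # polynomial (power=2) decay
    for p, v in zip(params, momentum_buffers):
        if p.grad is None:
            continue
        g = p.grad
        w_norm = p.norm()
        g_norm = g.norm()
        # Local LR: eta * ||w|| / (||g|| + beta * ||w||)
        lam = eta * w_norm / (g_norm + weight_decay * w_norm + 1e-12)
        # Momentum update: v <- m * v + global_lr * lam * (g + beta * w)
        v.mul_(momentum).add_(global_lr * lam * (g + weight_decay * p))
        # Weight update: w <- w - v
        p.sub_(v)


# Usage (assuming a model whose .grad fields were filled by loss.backward()):
#   params = [p for p in model.parameters() if p.requires_grad]
#   buffers = [torch.zeros_like(p) for p in params]
#   lars_step(params, buffers, t=0, T=100_000)
```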
1708.03888 | 16 | The local LR strongly depends on the layer and the batch size (see Figure 2).
5 One can consider LARS as a special case of the block-diagonal re-scaling of Lafond et al. (2017).
AlexNet-BN with LARS, Layer 1: Convolutional, Weight (local LR vs. epochs for Batch 256, 1024, 8192)
AlexNet-BN with LARS, Layer 1: Convolutional, Bias (local LR vs. epochs for Batch 256, 1024, 8192)
(a) Local LR, conv1-weights
(b) Local LR, conv1-bias
AlexNet-BN with LARS, Layer 5: Convolutional, Weight (local LR vs. epochs for Batch 256, 1024, 8192) | 1708.03888#16 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 17 | AlexNet-BN with LARS, Layer 5: Convolutional, Bias (local LR vs. epochs for Batch 256, 1024, 8192)
(c) Local LR, conv5-weights (d) Local LR, conv5-bias
Figure 2: LARS: local LR for different layers and batch sizes
# 5 TRAINING WITH LARS
We re-trained Alexnet and Alexnet-BN with LARS for batches up to 32K 6. For B=8K the accuracy of both networks matched the baseline B=512 (see Figure 3). Alexnet-BN trained with B=16K lost 0.9% in accuracy, and trained with B=32K lost 2.6%.
Table 3: Alexnet and Alexnet-BN: Training with LARS
# (a) Alexnet (warm-up for 2 epochs)
(b) Alexnet-BN (warm-up for 5 epochs) | 1708.03888#17 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 18 | (b) Alexnet-BN (warm-up for 5 epochs)
(a) Alexnet: Batch 512, 4K, 8K, 16K, 32K; LR 2, 10, 10, 14, TBD; accuracy,% 58.7, 58.5, 58.2, 55.0, TBD. (b) Alexnet-BN: Batch 512, 4K, 8K, 16K, 32K; LR 2, 10, 14, 23, 22; accuracy,% 60.2, 60.4, 60.1, 59.3, 57.8
6 Models were trained for 100 epochs using SGD with momentum=0.9, weight decay=0.0005, a polynomial (p=2) decay LR policy, and LARS coefficient η = 0.001. Training was done on an NVIDIA DGX1. To emulate large batches (B=16K and 32K) we used the iter_size parameter to partition the mini-batch into smaller chunks. The weight update is done after the gradients for the last chunk are computed.
AlexNet-BN for ImageNet: Top-1 test accuracy vs. epochs for Batch=512 and Batch=8192 | 1708.03888#18 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
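The footnote in the chunk above emulates B=16K and 32K by splitting each mini-batch into chunks (Caffe's iter_size) and updating the weights only after the last chunk. A hedged PyTorch-style equivalent is gradient accumulation, sketched below with an illustrative iter_size.

```python
def train_with_accumulation(model, optimizer, criterion, loader, iter_size=4):
    """Emulate a large batch: accumulate gradients over `iter_size` chunks,
    then apply a single weight update (like Caffe's iter_size)."""
    optimizer.zero_grad()
    for step, (images, labels) in enumerate(loader, start=1):
        loss = criterion(model(images), labels) / iter_size  # average over chunks
        loss.backward()                                       # gradients accumulate
        if step % iter_size == 0:
            optimizer.step()       # update only after the last chunk
            optimizer.zero_grad()
```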
1708.03888 | 19 | AlexNet-BN for ImageNet: Top-1 test accuracy vs. epochs for Batch=512 (baseline) and Batch=8192 (LARS)
(a) Training without LARS (b) Training with LARS
Figure 3: LARS: Alexnet-BN with B=8K
There is a relatively wide interval of base LRs which gives the "best" accuracy. For example, for Alexnet-BN with B=16K, LRs in [13, 22] give an accuracy of about 59.3%; for B=32K, LRs in [17, 28] give about 57.5%.
Alexnet-BN: Accuracy vs. base LR for Batch=16K and Batch=32K (LARS)
Figure 4: Alexnet-BN, B=16K and 32K: Accuracy as a function of LR
Next we retrained Resnet-50 (ver. 1 from He et al. (2016)) with LARS. As a baseline we used B=256 with a corresponding top-1 accuracy of 73%. 7
Table 4: ResNet50 with LARS. | 1708.03888#19 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 20 | Table 4: ResNet50 with LARS.
Batch: 256, 8K, 16K, 32K
LR policy: poly(2), LARS+poly(2), LARS+poly(2), LARS+poly(2)
γ: 0.2, 0.6, 2.5, 2.9
warm-up: N/A, 5, 5, 5
accuracy, %: 73.0, 72.7, 73.0, 72.3
7 Note that our baseline of 73% is lower than the published state-of-the-art of 75% (Goyal et al. (2017) and Cho et al. (2017)) for a few reasons. We trained with minimal data augmentation (pre-scale images to 256x256 and use a random 224x224 crop with horizontal flip). During testing we used a single model and one central crop. The state-of-the-art accuracy of 75% was achieved with more extensive data augmentation during testing, and with multi-model, multi-crop testing. For more details see the log files at https://people.eecs.berkeley.edu/~youyang/publications/batch. | 1708.03888#20 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 21 | ImageNet with ResNet-50 (without data augmentation): Top-1 test accuracy vs. epochs for Batch=32K (LR=2.9, warm-up, LARS), Batch=16K (LR=2.5, warm-up, LARS), Batch=8K (LR=6.4, warm-up), and Batch=256 (LR=0.2)
Figure 5: Scaling ResNet-50 up to B=32K with LARS.
All networks were trained using SGD with momentum 0.9 and weight decay 0.0001 for 90 epochs. We used LARS and a warm-up of 5 epochs with a polynomial decay (power=2) LR policy.
We found that with LARS we can scale Resnet-50 up to a batch of B=32K with almost the same (-0.7%) accuracy as the baseline.
# 6 LARGE BATCH VS NUMBER OF STEPS | 1708.03888#21 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 22 | # 6 LARGE BATCH VS NUMBER OF STEPS
As one can see from the Alexnet-BN example for B=32K, even training with LARS and a large LR does not reach the baseline accuracy. But the accuracy can be recovered completely by just training longer. We argue that when the batch is very large, the stochastic gradients become very close to the true gradients, so increasing the batch does not give much additional gradient information compared to smaller batches.
Table 5: Alexnet-BN, B=32K: Accuracy vs Training duration
Num of epochs: 100, 125, 150, 175, 200; accuracy, %: 57.8, 59.2, 59.5, 59.5, 59.9
# 7 CONCLUSION
A large batch size is key to scaling up the training of convolutional networks. The existing approach for large-batch training, based on using large learning rates, leads to divergence, especially during the initial phase, even with learning rate warm-up. To solve these optimization difficulties we proposed a new algorithm, which adapts the learning rate for each layer (LARS). Using LARS, we extended the scaling of Alexnet and Resnet-50 to B=32K. Training these networks with batches above 32K without accuracy loss is still an open problem.
# REFERENCES | 1708.03888#22 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 23 | # REFERENCES
Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016.
Minsik Cho, Ulrich Finkler, Sameer Kumar, David Kung, Vaibhav Saxena, and Dheeraj Sreedhar. Powerai ddl. arXiv preprint arXiv:1708.02188, 2017.
Valeriu Codreanu, Damian Podareanu, and Vikram Saletore. Blog: Achieving deep learning training in less than 40 minutes on ImageNet-1K with scale-out Intel Xeon/Xeon Phi architectures. https://blog.surf.nl/en/imagenet-1k-training-on-intel-xeon-phi-in-less-than-40-minutes/, 2017.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pp. 248-255. IEEE, 2009. | 1708.03888#23 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 24 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. | 1708.03888#24 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.03888 | 25 | Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
Jean Lafond, Nicolas Vasilache, and Léon Bottou. Diagonal rescaling for neural networks. arXiv preprint arXiv:1705.09319v1, 2017.
Mu Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, CMU, 2017.
Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661-670. ACM, 2014.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop, coursera: Neural networks for machine learning. University of Toronto, Tech. Rep, 2012.
8 | 1708.03888#25 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913 | [
{
"id": "1609.04836"
},
{
"id": "1705.08741"
},
{
"id": "1706.02677"
},
{
"id": "1604.00981"
},
{
"id": "1708.02188"
},
{
"id": "1705.09319"
}
] |
1708.02901 | 1 | Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: "different instances but a similar viewpoint and category" and "different viewpoints of the same instance". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with | 1708.02901#1 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 2 | graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task. | 1708.02901#2 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 3 | (Figure 1 diagram: inter-instance invariance, intra-instance invariance, and transitive invariance between objects A, B and tracked objects A', B'; more examples shown below the diagram)
Figure 1: We propose to obtain rich invariance by applying simple transitive relations. In this example, two different cars A and B are linked by the features that are good for inter-instance invariance (e.g., using [9]); and each car is linked to another view (A' and B') by visual tracking [61]. Then we can obtain new invariance from object pairs (A, B'), (A', B), and (A', B') via transitivity. We show more examples in the bottom.
# 1. Introduction
to complicated visual recognition tasks [17, 38].
Visual invariance is a core issue in learning visual representations. Traditional features like SIFT [39] and HOG [6] are histograms of edges that are to an extent invariant to illumination, orientations, scales, and translations. Modern deep representations are capable of learning high-level invariance from large-scale data [47], e.g., viewpoint, pose, deformation, and semantics. These can also be transferred | 1708.02901#3 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
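The transitivity illustrated in Figure 1, composing "same instance, different viewpoint" edges with "different instance, similar viewpoint" edges, can be expressed as a simple graph operation. The Python sketch below uses made-up node names and is an illustration of the idea, not the paper's actual mining pipeline.

```python
# Two edge types over object nodes (hypothetical example objects).
intra_instance = {("A", "A'"), ("B", "B'")}  # same object, tracked to another view
inter_instance = {("A", "B")}                # different objects, similar view/category


def transitive_pairs(intra, inter):
    """New positive pairs obtained by composing the two edge types."""
    views = {}
    for obj, tracked in intra:
        views.setdefault(obj, set()).add(tracked)
    pairs = set()
    for a, b in inter:
        for a2 in views.get(a, set()) | {a}:
            for b2 in views.get(b, set()) | {b}:
                if (a2, b2) not in inter and a2 != b2:
                    pairs.add((a2, b2))
    return pairs


# Yields the pairs (A, B'), (A', B) and (A', B') from Figure 1.
print(transitive_pairs(intra_instance, inter_instance))
```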
1708.02901 | 4 | In the scheme of supervised learning, human annotations that map a variety of examples into a single label provide supervision for learning invariant representations. For example, two horses with different illumination, poses, and breeds are invariantly annotated as a category of "horse". Such human knowledge on invariance is expected to be learned by capable deep neural networks [33, 28] through
carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort. | 1708.02901#4 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 5 | carefully annotated data. However, large-scale, high-quality annotations come at a cost of expensive human effort.
Unsupervised or "self-supervised" learning (e.g., [61, 9, 45, 63, 64, 35, 44, 62, 40, 66]) has recently attracted increasing interest because the "labels" are free to obtain. Unlike supervised learning that learns invariance from semantic labels, the self-supervised learning scheme mines it from the nature of the data. We observe that most self-supervised approaches learn representations that are invariant to: (i) inter-instance variations, which reflect the commonality among different instances. For example, relative positions of patches [9] (see also Figure 3) or channels of colors [63, 64] can be predicted through the commonality shared by many object instances; (ii) intra-instance variations. Intra-instance invariance is learned from the pose, viewpoint, and illumination changes by tracking a single moving instance in videos [61, 44]. However, neither source of invariance can be as rich as that provided by human annotations on large-scale datasets like ImageNet. | 1708.02901#5 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 6 | Even after significant advances in the field of self-supervised learning, there is still a long way to go compared to supervised learning. What should be the next steps? It seems that an obvious way is to obtain multiple sources of invariance by combining multiple self-supervised tasks, e.g., via multiple losses. Unfortunately, this naïve solution turns out to give little improvement (as we will show by experiments).
We argue that the trick lies not in the tasks but in the way of exploiting the data. To leverage both intra-instance and inter-instance invariance, in this paper we construct a huge affinity graph consisting of two types of edges (see Figure 1): the first type of edges relates "different instances of similar viewpoints/poses and potentially the same category", and the second type of edges relates "different viewpoints/poses of an identical instance". We instantiate the first type of edges by learning commonalities across instances via the approach of [9], and the second type by unsupervised tracking of objects in videos [61]. We set up simple transitive relations on this graph to infer more complex invariance from the data, which are then used to train a Triplet-Siamese network for learning visual representations. | 1708.02901#6 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
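The pairs inferred from the affinity graph are used to train a Triplet-Siamese network. Below is a hedged sketch of a standard triplet ranking loss on L2-normalized features; the margin, the squared-L2 distance, and the feature dimension are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.5):
    """Ranking loss: pull anchor/positive together, push anchor/negative apart."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    d_ap = (a - p).pow(2).sum(dim=1)  # squared L2 distance to the positive
    d_an = (a - n).pow(2).sum(dim=1)  # squared L2 distance to the negative
    return F.relu(d_ap - d_an + margin).mean()


# Toy usage: features would come from a shared backbone (e.g. VGG16) applied to
# an anchor patch, a transitively related positive, and an unrelated negative.
loss = triplet_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
```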
1708.02901 | 7 | Experiments show that our representations learned without any annotations can be well transferred to the object detection task. Specifically, we achieve 63.2% mAP with VGG16 [50] when fine-tuning Fast R-CNN on VOC2007, against the ImageNet pre-training baseline of 67.3%. More importantly, we also report the first-ever result of un-/self-supervised pre-training models fine-tuned on the challenging COCO object detection dataset [37], achieving 23.5% AP compared against 24.4% AP that is fine-tuned from an ImageNet pre-trained counterpart (both using VGG16). To our knowledge, this is the closest accuracy to the ImageNet pre-training counterpart obtained on object detection tasks.
1708.02901 | 8 | # 2. Related Work
Unsupervised learning of visual representations is a research area of particular interest. Approaches to unsupervised learning can be roughly categorized into two main streams: (i) generative models, and (ii) self-supervised learning. Earlier methods for generative models include Auto-Encoders [43, 56, 34, 32] and Restricted Boltzmann Machines (RBMs) [24, 4, 54, 12]. For example, Le et al. [32] trained a multi-layer auto-encoder on a large-scale dataset of YouTube videos: although no label is provided, some neurons in high-level layers can recognize cats and human faces. Recent generative models such as Generative Adversarial Networks [20] and Variational Auto-Encoders [27] are capable of generating more realistic images. The generated examples, or the neural networks that learn to generate examples, can be exploited to learn representations of data [11, 10].
1708.02901 | 9 | Self-supervised learning is another popular stream for learning invariant features. Visual invariance can be captured by the same instance/scene taken in a sequence of video frames [61, 53, 26, 1, 41, 57, 35, 44, 42, 21]. For example, Wang and Gupta [61] leverage tracking of objects in videos to learn visual invariance within individual objects; Jayaraman and Grauman [26] train a Siamese network to model the ego-motion between two frames in a scene; Mathieu et al. [41] propose to learn representations by predicting future frames; Pathak et al. [44] train a network to segment the foreground objects, which are acquired via motion cues. On the other hand, common characteristics of different object instances can also be mined from data [9, 63, 64, 30, 31]. For example, relative positions of image patches [9] may reflect feasible spatial layouts of objects; possible colors can be inferred [63, 64] if the networks can relate colors to object appearances. Rather than relying on temporal changes in video, these methods are able to exploit still images.
1708.02901 | 10 | Our work is also closely related to mid-level patch clustering [51, 7, 8] and unsupervised discovery of semantic classes [48, 52], as we attempt to find reliable clusters in our affinity graph. In addition, the ranking function used in this paper is related to deep metric learning with Siamese architectures [5, 22, 19, 59, 25].
Analysis of the two types of invariance. Our generic framework can be instantiated by any two self-supervised methods that can respectively learn inter-/intra-instance invariance. In this paper we adopt Doersch et al.'s [9] context prediction method to build inter-instance invariance, and Wang and Gupta's [61] tracking method to build intra-instance invariance. We analyze their behaviors as follows. The context prediction task in [9] randomly samples a patch (blue in Figure 3) and one of its eight neighbors (red), and trains the network to predict their relative position, defined as an 8-way classification problem.
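For illustration, a minimal Python sketch of this kind of relative-position pretext task; this is not the exact pipeline of [9], and the patch size, gap, and classifier head here are assumptions.

```python
import random
import torch
import torch.nn as nn

# The eight neighbor positions of a central patch on a 3x3 grid.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_patch_pair(image, patch=96, gap=8):
    """Crop a central patch and one of its eight neighbors from a (C, H, W) tensor;
    the label is the index of the neighbor's relative position."""
    _, h, w = image.shape
    step = patch + gap
    y = random.randint(step, h - patch - step)
    x = random.randint(step, w - patch - step)
    label = random.randrange(8)
    dy, dx = OFFSETS[label]
    y2, x2 = y + dy * step, x + dx * step
    return image[:, y:y + patch, x:x + patch], image[:, y2:y2 + patch, x2:x2 + patch], label

class RelativePositionHead(nn.Module):
    """Concatenate the two patch embeddings and classify their relative position (8-way)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.fc = nn.Linear(2 * feat_dim, 8)

    def forward(self, f1, f2):
        return self.fc(torch.cat([f1, f2], dim=1))
```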
1708.02901 | 11 | Figure 2: Illustrations for our graph construction. We first cluster the object nodes into coarser clusters (namely "parent" clusters) and then inside each cluster we perform nearest-neighbor search to obtain "child" clusters consisting of 4 samples. Samples in each child cluster are linked to each other with "inter-instance" edges. We add new samples via visual tracking and link them to the original objects by "intra-instance" edges.
ï¬ned as an 8-way classiï¬cation problem. In the ï¬rst two examples in Figure 3, the context prediction model is able to predict that the âlegâ patch is below the âfaceâ patch of the cat, indicating that the model has learned some com- monality of spatial layout from the training data. However, the model would fail if the pose, viewpoint, or deforma- tion of the object is changed drastically, e.g., in the third example of Figure 3 â unless the dataset is diversiï¬ed and large enough to include gradually changing poses, it is hard for the models to learn that the changed pose can be of the same object type. | 1708.02901#11 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
1708.02901 | 12 | On the other hand, these changes can be more successfully captured by the visual tracking method presented in [61], e.g., see (A, A') and (B, B') in Figure 1. But by tracking an identical instance we cannot associate different instances of the same semantics. Thus we expect the representations learned in [61] to be weak in handling the variations between different objects in the same category.
# 3. Overview
Our goal is to learn visual representations which capture: (i) inter-instance invariance (e.g., two instances of cats should have similar features), and (ii) intra-instance invariance (pose, viewpoint, deformation, illumination, and other variance of the same object instance). We have tried to formulate this as a multi-task (multi-loss) learning problem in our initial experiments (detailed in Tables 2 and 3) and observed unsatisfactory performance. Instead of doing so, we propose to obtain a richer set of invariance by performing transitive reasoning on the data.
Our ï¬rst step is to construct a graph that describes the afï¬nity among image patches. A node in the graph denotes
1708.02901 | 13 | Figure 3: The context prediction task defined in [9]. Given two patches in an image, it learns to predict the relative position between them.
A node in the graph denotes an image patch. We define two types of edges in the graph that relate image patches to each other. The first type of edges, called inter-instance edges, link two nodes which correspond to different object instances of similar visual appearance; the second type of edges, called intra-instance edges, link two nodes which correspond to an identical object captured at different time steps of a track. The solid arrows in Figure 1 illustrate these two types of edges.
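As a concrete (purely illustrative) data structure, such a graph can be kept as a simple adjacency map with labeled edges; this is a sketch, not the authors' implementation.

```python
from collections import defaultdict

class PatchGraph:
    """Nodes are image-patch ids; every edge carries one of two labels:
    'inter' for different instances with similar appearance (from clustering),
    'intra' for the same instance at different time steps (from tracking)."""
    def __init__(self):
        self.adj = defaultdict(set)                 # node -> {(neighbor, kind), ...}

    def add_edge(self, a, b, kind):
        assert kind in ("inter", "intra")
        self.adj[a].add((b, kind))
        self.adj[b].add((a, kind))

    def neighbors(self, node, kind):
        return [n for n, k in self.adj[node] if k == kind]
```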
1708.02901 | 14 | Given the built graph, we want to transit the relations via the known edges and associate unconnected nodes that may provide under-explored invariance (Figure 1, dash arrows). Specifically, as shown in Figure 1, if patches (A, B) are linked via an inter-instance edge and (A, Aâ) and (B, Bâ) respectively are linked via âintra-instanceâ edges, we hope to enrich the invariance by simple transitivity and relate three new pairs of: (Aâ, Bâ), (A, Bâ), and (Aâ, B) (Figure 1, dash arrows).
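Using the PatchGraph sketch above, this transitive reasoning amounts to a small enumeration per inter-instance edge; the function name and graph interface are assumptions.

```python
def transitive_positive_pairs(graph, a, b):
    """Expand one inter-instance edge (a, b) with the intra-instance neighbors of
    its endpoints, yielding the richer positive pairs (a', b), (a, b'), (a', b')."""
    pairs = [(a, b)]
    for ap in graph.neighbors(a, "intra"):          # e.g. A' tracked from A
        pairs.append((ap, b))
        for bp in graph.neighbors(b, "intra"):      # e.g. B' tracked from B
            pairs.append((ap, bp))
    for bp in graph.neighbors(b, "intra"):
        pairs.append((a, bp))
    return pairs
```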
We train a Triplet-Siamese network that encourages similar visual representations between the invariant samples (e.g., any pair consisting of A, A', B, B') and at the same time discourages similar visual representations to a third distractor sample (e.g., a random sample C' unconnected to A, A', B, B'). In all of our experiments, we apply VGG16 [50] as the backbone architecture for each branch of this Triplet-Siamese network. The visual representations learned by this backbone architecture are evaluated on other recognition tasks.
1708.02901 | 15 | # 4. Graph Construction
We construct a graph with inter-instance and intra-instance edges. Firstly, we apply the method of [61] on a large set of 100K unlabeled videos (introduced in [61]) and mine millions of moving objects using motion cues (Sec. 4.1). We use these image patches to construct the nodes of the graph.
On one hand, we apply the self-supervised method of [9] that learns context predictions on a large set of still images, which provides features to cluster the nodes and set up inter-instance edges (Sec. 4.2). On the other hand, we connect the image patches in the same visual track by intra-instance edges (Sec. 4.3).
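Putting the three steps together, the construction can be summarized by the sketch below; mine_moving_objects, child_clusters, and track_forward stand in for Sec. 4.1 to 4.3 and are placeholders, not released code.

```python
from itertools import combinations

def build_patch_graph(videos, context_net):
    """High-level flow of Section 4: mine patches (4.1), add inter-instance edges
    from child clusters of context-prediction features (4.2), and add
    intra-instance edges from short tracks (4.3). Helper functions are placeholders."""
    graph = PatchGraph()
    patches = mine_moving_objects(videos)                  # Sec. 4.1
    for group in child_clusters(patches, context_net):     # Sec. 4.2, groups of 4
        for a, b in combinations(group, 2):
            graph.add_edge(a, b, "inter")
    for p in patches:                                      # Sec. 4.3
        q = track_forward(p, num_frames=30)
        if q is not None:
            graph.add_edge(p, q, "intra")
    return graph
```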
1708.02901 | 16 | Figure 4: Some example clustering results. Each row shows the 4 examples in a child cluster (Sec. 4.2).
# 4.1. Mining Moving Objects
We follow the approach in [61] to find the moving objects in videos. As a brief introduction, this method first applies Improved Dense Trajectories (IDT) [58] on videos to extract SURF [2] feature points and their motion. The video frames are then pruned if there is too much motion (indicating camera motion) or too little motion (e.g., noisy signals). For the remaining frames, it crops a 227x227 bounding box (from roughly 600x400 images) which includes the largest number of moving points as the foreground object. However, for computational efficiency, in this paper we rescale the image patches to 96x96 after cropping and use them as inputs for clustering and training.
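A rough sketch of the per-frame pruning and cropping logic described above; the motion-point thresholds and window stride are illustrative, and IDT/SURF extraction is assumed to happen elsewhere.

```python
import cv2

def mine_patch_from_frame(frame, moving_points, min_pts=20, max_pts=2000,
                          crop=227, out=96, stride=32):
    """Skip frames with too much motion (camera motion) or too little (noise), then
    crop the 227x227 window covering the most moving points and rescale to 96x96."""
    h, w = frame.shape[:2]
    if h < crop or w < crop or not (min_pts <= len(moving_points) <= max_pts):
        return None
    best_xy, best_count = (0, 0), -1
    for y in range(0, h - crop + 1, stride):
        for x in range(0, w - crop + 1, stride):
            count = sum(1 for px, py in moving_points
                        if x <= px < x + crop and y <= py < y + crop)
            if count > best_count:
                best_xy, best_count = (x, y), count
    x, y = best_xy
    return cv2.resize(frame[y:y + crop, x:x + crop], (out, out))
```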
1708.02901 | 17 | # 4.2. Inter-instance Edges via Clustering
Given the extracted image patches which act as nodes, we want to link them with extra inter-instance edges. We rely on the visual representations learned from [9] to do this. We connect the nodes representing image patches which are close in the feature space. In addition, motivated by the mid-level clustering approaches [51, 7], we want to obtain millions of object clusters with a small number of objects in each, to maintain high "purity" of the clusters. We describe the implementation details of this step as follows.
1708.02901 | 18 | We extract the pool5 features of the VGG16 network trained as in [9]. Following [9], we use ImageNet without labels to train this network. Note that because we use a patch size of 96x96, the dimension of our pool5 feature is 3x3x512 = 4608. The distance between samples is calculated by the cosine distance of these features. We want the object patches in each cluster to be close to each other in the feature space, and we care less about the differences between clusters. However, directly clustering millions of image patches into millions of small clusters (e.g., by K-means) is time consuming. So we apply a hierarchical clustering approach (2-stage in this paper) where we first group the images into a relatively small number of clusters, and then find small groups of examples inside each cluster via nearest-neighbor search.
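In code, the features and the distance amount to the following; this is a sketch, and `context_net.pool5` is a placeholder for the pre-trained context-prediction network.

```python
import numpy as np

def pool5_feature(context_net, patch):
    """pool5 activation for a 96x96 patch, flattened to a 3*3*512 = 4608-d vector."""
    return np.asarray(context_net.pool5(patch)).reshape(-1)

def cosine_distance(f1, f2, eps=1e-8):
    """1 minus cosine similarity between two feature vectors."""
    return 1.0 - float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps))
```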
1708.02901 | 19 | Specifically, in the first stage of clustering, we apply K-means clustering with K = 5000 on the image patches. We then remove the clusters with fewer than 100 examples (this reduces K to 546 in our experiments on the image patches mined from the video dataset). We view these clusters as the "parent" clusters (blue circles in Figure 2). Then, in the second stage of clustering, inside each parent cluster we perform nearest-neighbor search for each sample and obtain its top 10 nearest neighbors in the feature space. We then find any group of samples with a group size of 4, inside which all the samples are each other's top-10 nearest neighbors. We call these small clusters with 4 samples "child" clusters (green circles in Figure 2). We then link these image patches with each other inside a child cluster via "inter-instance" edges. Note that different child clusters may overlap, i.e., we allow the same sample to appear in different groups. However, in our experiments we find that most samples appear only in one group. We show some results of clustering in Figure 4.
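A sketch of this two-stage procedure with scikit-learn; in practice a mini-batch K-means and an approximate nearest-neighbor index would be needed at this scale, and features are L2-normalized so Euclidean neighbors match cosine neighbors. The function and parameter names are illustrative only.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def find_child_clusters(features, k_parent=5000, min_size=100, knn=10, group_size=4):
    """Stage 1: coarse K-means into parent clusters (small ones dropped).
    Stage 2: inside each parent, groups of 4 samples that are mutually within
    each other's top-10 nearest neighbors become child clusters."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    parents = KMeans(n_clusters=k_parent).fit_predict(feats)
    child_clusters = set()
    for p in np.unique(parents):
        idx = np.where(parents == p)[0]
        if len(idx) < min_size:
            continue                                       # drop small parent clusters
        nbrs = NearestNeighbors(n_neighbors=knn + 1).fit(feats[idx])
        _, nn_idx = nbrs.kneighbors(feats[idx])            # column 0 is the point itself
        top10 = {idx[i]: set(idx[row[1:]]) for i, row in enumerate(nn_idx)}
        for a in idx:
            for rest in combinations(sorted(top10[a]), group_size - 1):
                cand = (a,) + rest
                mutual = all(u in top10[v] and v in top10[u]
                             for u, v in combinations(cand, 2))
                if mutual:
                    child_clusters.add(tuple(sorted(cand)))
    return [list(c) for c in child_clusters]
```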
1708.02901 | 20 | # 4.3. Intra-instance Edges via Tracking
To obtain rich variations of viewpoint and deformation changes of the same object instance, we apply visual tracking on the mined moving objects in the videos, as in [61]. More specifically, given a moving object in the video, we apply KCF [23] to track the object for N = 30 frames and obtain another sample of the object at the end of the track. Note that the KCF tracker does not require any human supervision. We add these new objects as nodes to the graph and link the two samples in the same track with an intra-instance edge (purple in Figure 2).
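A sketch of how such an edge can be added, assuming an OpenCV build that ships the KCF tracker; the graph interface follows the earlier PatchGraph sketch, and `register_patch` is a placeholder that crops the box, resizes it to 96x96, and returns a new node id.

```python
import cv2

def add_intra_instance_edge(graph, frames, box, node_id, register_patch, length=30):
    """Track the patch at `box` for `length` frames with KCF (no human supervision),
    register the final crop as a new node, and link it to `node_id` with an
    intra-instance edge. Returns the new node id, or None if tracking fails."""
    if len(frames) < length:
        return None
    tracker = cv2.TrackerKCF_create()                 # assumes an OpenCV build with KCF
    tracker.init(frames[0], tuple(int(v) for v in box))
    end_box = box
    for frame in frames[1:length]:
        ok, end_box = tracker.update(frame)
        if not ok:                                    # target lost: skip this track
            return None
    new_id = register_patch(frames[length - 1], end_box)
    graph.add_edge(node_id, new_id, "intra")
    return new_id
```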
1708.02901 | 21 | # 5. Learning with Transitions in the Graph
With the graph constructed, we want to link more image patches (see dotted links in Figure 1) which may be related via the transitivity of invariance. Objects subject to different levels of invariance can thus be related to each other. Specifically, if we have a set of nodes {A, B, A', B'} where (A, B) are connected by an inter-instance edge and (A, A') and (B, B') are connected by an intra-instance edge, then by assuming transitivity of invariance we expect the new pairs (A, B'), (A', B), and (A', B') to share similar high-level visual representations. Some examples are illustrated in Figures 1 and 5.
We train a deep neural network (VGG16) to generate similar visual representations if the image patches are linked by inter-instance/intra-instance edges or their transitivity (which we call a positive pair of samples).
1708.02901 | 22 | Figure 5: Examples used for training the network. Each column shows a set of image patches {A, B, A', B'}. Here, A and B are linked by an inter-instance edge, and A'/B' are linked to A/B via intra-instance edges.
[Figure 6 graphic: three weight-sharing convolutional towers with 4096-d and 1024-d fully-connected outputs, fed with triplets such as (A, A', C) and (A, B', C).]
Figure 6: Our Triplet-Siamese network. We can feed the network with different combinations of examples.
To avoid a trivial solution of identical representations, we also encourage the network to generate dissimilar representations if a node is expected to be unrelated. Specifically, we constrain the image patches from different "parent" clusters (which are more likely to have different categories) to have different representations (which we call a negative pair of samples). We design a Triplet-Siamese network with a ranking loss function [59, 61] such that the distance between related samples should be smaller than the distance between unrelated samples.
1708.02901 | 23 | Our Triplet-Siamese network includes three towers of a ConvNet with shared weights (Figure 6). For each tower, we adopt the standard VGG16 architecture [50] for the convolutional layers, after which we add two fully-connected layers with 4096-d and 1024-d outputs. The Triplet-Siamese network accepts a triplet sample as its input: the first two image patches in the triplet are a positive pair, and the last two are a negative pair. We extract their 1024-d features and calculate the ranking loss as follows.
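Before giving the loss, a PyTorch sketch of one tower; because the same module instance is reused for all three branches, the weights are shared as in Figure 6. This is illustrative only, not the original model definition.

```python
import torch.nn as nn
import torchvision

class EmbeddingTower(nn.Module):
    """VGG16 convolutional layers followed by 4096-d and 1024-d fully-connected
    layers; with 96x96 inputs the conv output is 512x3x3 = 4608-d."""
    def __init__(self):
        super().__init__()
        self.features = torchvision.models.vgg16().features
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 3 * 3, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, 1024),
        )

    def forward(self, x):                # x: (N, 3, 96, 96)
        return self.head(self.features(x))
```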
Given an arbitrary pair of image patches A and B, we define their distance as D(A, B) = 1 − F(A) · F(B) / (||F(A)|| ||F(B)||), where F(·) is the representation mapping of the network. With a triplet (X, X+, X−), where (X, X+) is a positive pair and (X, X−) is a negative pair as defined above, we minimize the ranking loss:
L(X, X+, X−) = max{0, D(X, X+) − D(X, X−) + m},
where m is a margin set as 0.5 in our experiments. Although we have only one objective function, we have different types of training examples. As illustrated in Figure 6, given the set of related samples {A, B, A', B'} (see Figure 5) and a random distractor sample C from another parent cluster, we can train the network to handle, e.g., viewpoint invariance for the same instance via L(A, A', C) and invariance to different objects sharing the same semantics via L(A, B', C).
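In code, the distance and the loss above can be written as follows (a sketch); applying the same function to the triplets (A, A', C) and (A, B', C) gives the two kinds of invariance.

```python
import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    # D(A, B) = 1 - <F(A), F(B)> / (||F(A)|| * ||F(B)||), computed per row
    return 1.0 - F.cosine_similarity(a, b, dim=1)

def ranking_loss(x, x_pos, x_neg, margin=0.5):
    """L(X, X+, X-) = max{0, D(X, X+) - D(X, X-) + m}, averaged over the batch."""
    return torch.clamp(cosine_distance(x, x_pos) - cosine_distance(x, x_neg) + margin,
                       min=0).mean()
```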
Besides exploring these relations, we have also tried to enforce the distance between different objects to be larger than the distance between two different viewpoints of the same object, e.g., D(A, A') < D(A, B'). But we have not found that this extra relation brings any improvement. Interestingly, we found that the representations learned by our method can in general satisfy D(A, A') < D(A, B') after training.
1708.02901 | 25 | # 6. Experiments
We perform extensive analysis on our self-supervised representations. We first evaluate our ConvNet as a feature extractor on different tasks without fine-tuning. We then show the results of transferring the representations to vision tasks including object detection and surface normal estimation with fine-tuning.
Implementation Details. To prepare the data for training, we download the 100K videos from YouTube using the URLs provided by [36, 61]. By mining the moving objects and tracking in the videos, we obtain roughly 10 million image patches of objects. By applying the transitivity on the constructed graph, we obtain 7 million positive pairs of objects, where each pair consists of two different instances with different viewpoints. We also randomly sample 2 million object pairs connected by intra-instance edges.
We train our network with these 9 million pairs of images using a learning rate of 0.001 and a mini-batch size of 100. For each pair, we sample the third (distractor) patch from a different "parent cluster" in the same mini-batch. We use the network pre-trained in [9] to initialize our convolutional layers and randomly initialize the fully-connected layers. We train the network for 200K iterations with our method.
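A training-loop sketch under this schedule; the pair loader, the in-batch distractor sampler, and the loss function (e.g., the ranking loss sketched earlier) are passed in as placeholders, and the momentum value is not stated in the text.

```python
import torch

def train_embedding(model, positive_pair_loader, sample_distractor, loss_fn,
                    iters=200_000, lr=0.001, margin=0.5):
    """Mini-batches of 100 positive pairs (~9M pairs in total); `sample_distractor(x)`
    returns in-batch patches from a different parent cluster and is a placeholder."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, x_pos) in enumerate(positive_pair_loader):
        loss = loss_fn(model(x), model(x_pos), model(sample_distractor(x)), margin)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step + 1 >= iters:
            break
    return model
```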
1708.02901 | 26 | # 6.1. Qualitative Results without Fine-tuning
We ï¬rst perform nearest-neighbor search to show qual- itative results. We adopt the pool5 feature of the VGG16
Figure 7: Nearest-neighbor search on the PASCAL VOC dataset. We extract three types of features: (a) the context prediction network from [9], (b) the network trained with our self-supervised method, and (c) the network pre-trained on the annotated ImageNet dataset. We show that our network can represent a greater variety (e.g., viewpoints) of objects of the same category.
1708.02901 | 27 | Figure 8: Top 6 responses for neurons in 4 different convolutional units of our network, visualized using [65].
We adopt the pool5 features of the VGG16 network for all methods, without any fine-tuning (Figure 7). We do this experiment on the object instances cropped from the PASCAL VOC 2007 dataset [13] (trainval). As Figure 7 shows, given a query image on the left, the network pre-trained with the context prediction task [9] can retrieve objects with very similar viewpoints. In contrast, our network shows more variation among the retrieved objects and can often retrieve objects of the same class as the query. We also show the nearest-neighbor results using fully-supervised ImageNet pre-trained features as a comparison.
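The retrieval itself is a plain cosine-similarity ranking over pool5 features; a sketch follows, assuming the features have already been extracted and stacked row-wise.

```python
import numpy as np

def nearest_neighbors(query_feat, gallery_feats, topk=6):
    """Rank gallery object crops by cosine similarity to the query feature.
    `gallery_feats` is an (N, 4608) array of flattened pool5 features."""
    q = query_feat.ravel() / (np.linalg.norm(query_feat) + 1e-8)
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-8)
    return np.argsort(-(g @ q))[:topk]
```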
# 6.2. Analysis on Object Detection
We evaluate how well our representations can be transferred to object detection by fine-tuning Fast R-CNN [16] on PASCAL VOC 2007 [13]. We use the standard trainval set for training and the test set for testing, with VGG16 as the base architecture. For the detection network, we initialize the weights of the convolutional layers from our self-supervised network and randomly initialize the fully-connected layers using Gaussian noise with zero mean and 0.001 standard deviation.
During fine-tuning of Fast R-CNN, we use 0.00025 as the starting learning rate. We reduce the learning rate by 1/10 every 50K iterations and fine-tune the network for 150K iterations in total. Unlike standard Fast R-CNN, where the first few convolutional layers of the ImageNet pre-trained network are kept fixed, we fine-tune all layers on the PASCAL data because our model is pre-trained in a very different domain (e.g., video patches).
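This schedule is a plain step decay; a hedged PyTorch sketch (the model is a placeholder, and the momentum/weight-decay values are common Fast R-CNN defaults assumed here rather than stated in the text):

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(10, 2)  # placeholder for the detection network
optimizer = SGD(model.parameters(), lr=2.5e-4, momentum=0.9, weight_decay=5e-4)
# Divide the learning rate by 10 every 50K iterations; train for 150K iterations,
# so the decays happen at 50K and 100K.
scheduler = StepLR(optimizer, step_size=50_000, gamma=0.1)

for iteration in range(150_000):
    # loss.backward() and optimizer.step() would run here in real training
    optimizer.step()
    scheduler.step()
```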
If we train Fast R-CNN from scratch without any pre-training, we obtain only 39.7% mAP. With our self-supervised network as initialization, the detection mAP increases to 63.2% (an improvement of 23.5 points). Our result compares competitively (4.1 points lower) to the counterpart using ImageNet pre-training (67.3% with VGG16).
We also visualize the features using the visualization technique of [65]. For each convolutional unit in conv5_3, we retrieve the objects that give the highest activation responses and highlight the receptive fields on the images. We visualize the top 6 images for 4 different convolutional units in Figure 8. We can see that these convolutional units correspond to different semantic object parts (e.g., fronts of cars or buses, wheels, animal legs, eyes or faces).
As we incorporate the invariance captured by [61] and [9], we also evaluate the results using these two approaches individually (Table 1). By fine-tuning the context prediction network of [9], we obtain 61.5% mAP. To train the network of [61], we use exactly the same loss function and initialization as our approach, except that the training examples only come from the same instance within the same visual track (i.e., only the samples linked by intra-instance edges in our graph). Its result is 60.2% mAP. Our result (63.2%)
[Table 1: per-class detection Average Precision (%) on the VOC 2007 test set using Fast R-CNN. mAP: from scratch 39.7, Vid-Edge [35] 44.2, Context [9] 61.5, Tracking [61] 60.2, Ours 63.2, ImageNet 67.3; the per-class columns are omitted here.]
[Table 2 (ablations): per-class detection Average Precision (%) on the VOC 2007 test set using Fast R-CNN. mAP: Ours 63.2, Multi-Task 62.1, Ours (15-frame) 61.5, Ours (HOG) 60.4; the per-class columns are omitted here.]
Table 2: More ablative studies on object detection on the VOC 2007 test set using Fast R-CNN [16] (with selective search proposals [55]).
is better than both methods. This comparison indicates the effectiveness of exploiting a greater variety of invariance in representation learning.
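For reference, the tracking-only baseline is trained with a ranking objective over intra-instance pairs; a minimal sketch of such a triplet ranking loss on L2-normalized embeddings (the margin value is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def ranking_loss(anchor, positive, negative, margin=0.5):
    """Pull (anchor, positive) together and push (anchor, negative) apart."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared distance to positive
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # squared distance to negative
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 1024-D embeddings for a batch of 8 triplets.
a, p, n = (F.normalize(torch.randn(8, 1024), dim=1) for _ in range(3))
print(ranking_loss(a, p, n).item())
```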
Is multi-task learning sufficient? An alternative way of obtaining both intra- and inter-instance invariance is to apply multi-task learning with the two losses of [9] and [61]. Next we compare with this method.
| method | All | >c1 | >c2 | >c3 | >c4 | >c5 |
|---|---|---|---|---|---|---|
| Context [9] | 62.6 | 61.1 | 60.9 | 57.0 | 49.7 | 38.1 |
| Tracking [61] | 62.2 | 61.5 | 62.2 | 61.4 | 58.9 | 39.5 |
| Multi-Task [9, 61] | 62.4 | 63.2 | 63.5 | 62.9 | 58.7 | 27.6 |
| Ours | 65.0 | 64.5 | 63.6 | 60.4 | 55.7 | 43.1 |
| ImageNet | 70.9 | 71.1 | 71.1 | 70.2 | 70.3 | 64.3 |
For the task in [61], we use the same network architecture as our approach; for the task in [9], we follow their design of a Siamese network. We apply different fully connected layers for the different tasks, but share the convolutional layers between the two tasks. Given a mini-batch of training samples, we perform ranking among these images as well as context prediction in each image simultaneously via two losses. The representations learned in this way, when fine-tuned with Fast R-CNN, obtain 62.1% mAP ("Multi-Task" in Table 2). Compared to only using context prediction [9] (61.5%), multi-task learning gives only a marginal improvement (0.6%). This result suggests that multi-task learning in this way is not sufficient; organizing and exploiting the relationships of the data, as done by our method, is more effective for representation learning.
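Structurally, this baseline is a shared convolutional trunk with one head per task and a summed loss. A schematic sketch (the layer sizes and head dimensions are illustrative, not the exact architectures of [9] or [61]):

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared conv trunk; one head for 8-way context (patch position)
    classification as in [9], one embedding head for ranking as in [61]."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(                       # shared layers
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.context_head = nn.Linear(64, 8)
        self.embed_head = nn.Linear(64, 128)

    def forward(self, x):
        h = self.trunk(x)
        return self.context_head(h), self.embed_head(h)

net = MultiTaskNet()
logits, embed = net(torch.randn(4, 3, 96, 96))
# total_loss = context_loss(logits, labels) + ranking_loss(embed, ...)
```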
Table 3: Object detection Average Precision (%) on the VOC 2007 test set using joint training Faster R-CNN [46].
How important is clustering? Furthermore, we want to understand how important it is to cluster images with features learned from still images [9]. We perform another ablative analysis by replacing the features of [9] with HOG [6] during clustering. The rest of the pipeline remains exactly the same. The final result is 60.4% mAP ("HOG" in Table 2). This shows that if the features used for clustering are not invariant enough to handle different object instances, the transitivity in the graph becomes less reliable.
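A hedged sketch of such a clustering step with HOG descriptors; scikit-image and scikit-learn are used here, and k-means is an illustrative stand-in for whatever grouping is applied to the descriptors:

```python
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def cluster_patches(patches, n_clusters=10):
    """patches: list of equally sized HxW grayscale arrays -> cluster id per patch."""
    descs = np.stack([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(descs)

# Toy usage on random 64x64 "patches".
rng = np.random.default_rng(0)
patches = [rng.random((64, 64)) for _ in range(50)]
print(cluster_patches(patches, n_clusters=5))
```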
# 6.3. Object Detection with Faster R-CNN
How important is tracking? To further understand how much visual tracking helps, we perform an ablative analysis in which the visual tracks are made shorter: we track the moving objects for 15 frames instead of the default 30 frames. This is expected to reduce the viewpoint/pose/deformation variance contributed by tracking. Our model pre-trained in this way gives 61.5% mAP ("15-frame" in Table 2) when fine-tuned for detection. This number is similar to that of using context prediction only (Table 1). The result is not surprising, because shorter tracks do not add much new information for training. It suggests that adding stronger viewpoint/pose/deformation invariance is important for learning better features for object detection.
Although Fast R-CNN [16] has been a popular testbed for un-/self-supervised features, it relies on Selective Search proposals [55] and thus is not fully end-to-end. We therefore further evaluate the representations on object detection with the end-to-end Faster R-CNN [46], where the Region Proposal Network (RPN) may suffer if the features are of low quality.
PASCAL VOC 2007 Results. We fine-tune Faster R-CNN on 8 GPUs for 35K iterations with an initial learning rate of 0.00025, which is reduced by 1/10 after every 15K iterations. Table 3 shows the results of fine-tuning all
| method | AP | AP50 | AP75 | APS | APM | APL |
|---|---|---|---|---|---|---|
| from scratch | 20.5 | 40.1 | 19.0 | 5.6 | 22.5 | 32.7 |
| Context [9] | 22.7 | 43.5 | 21.2 | 6.6 | 24.9 | 36.5 |
| Tracking [61] | 22.6 | 42.8 | 21.6 | 6.3 | 25.0 | 36.2 |
| Multi-Task [9, 61] | 22.0 | 42.3 | 21.1 | 6.6 | 24.5 | 35.0 |
| Ours | 23.5 | 44.4 | 22.6 | 7.1 | 25.9 | 37.3 |
| ImageNet (shorter) | 23.7 | 44.5 | 23.5 | 7.2 | 26.9 | 37.4 |
| ImageNet | 24.4 | 46.4 | 23.1 | 7.9 | 27.4 | 38.1 |
Table 4: Object detection Average Precision (%, COCO definitions) on COCO minival using joint training Faster R-CNN [46]. "(shorter)" indicates a shorter training time (fewer iterations, 61.25K) used by the codebase of [46].
layers ("All") and also ablative results for freezing different levels of convolutional layers (e.g., the column >c3 represents freezing all the layers below and including conv3_x in VGG16 during fine-tuning). Our method obtains an even better result of 65.0% using Faster R-CNN, showing a larger gap compared to the counterparts of [9] (62.6%) and [61] (62.2%). Notably, when freezing all the convolutional layers and only fine-tuning the fully-connected layers, our method (43.1%) is much better than the other competitors. And we again find that the multi-task alternative does not work well for Faster R-CNN.
COCO Results. We further report results on the challenging COCO detection dataset [37]. To the best of our knowledge, this is the first work of this kind evaluated on COCO detection. We fine-tune Faster R-CNN on 8 GPUs for 120K iterations with an initial learning rate of 0.001, which is reduced by 1/10 after 80K iterations. Training uses the COCO trainval35k split and evaluation uses the minival5k split introduced by [3].
We report the COCO results in Table 4. Faster R-CNN fine-tuned with our self-supervised network obtains 23.5% AP using the COCO metric, which is very close (<1%) to fine-tuning Faster R-CNN with the ImageNet pre-trained counterpart (24.4%). In fact, if the fine-tuning of the ImageNet counterpart follows the "shorter" schedule in the public code (61.25K iterations on 8 GPUs, converted from 490K iterations on 1 GPU)1, the ImageNet supervised pre-training version reaches 23.7% AP and is comparable with ours. This comparison further strengthens the significance of our result.
To the best of our knowledge, our model achieves the best performance reported to date on VOC 2007 and COCO using un-/self-supervised pre-training.
# 6.4. Adapting to Surface Normal Estimation
To show the generalization ability of our self-supervised representations, we adapt the learned network to the surface normal estimation task. In this task, given a single
# 1 https://github.com/rbgirshick/py-faster-rcnn
| method | Mean | Median | 11.25° | 22.5° | 30° |
|---|---|---|---|---|---|
| from scratch | 31.3 | 25.3 | 24.2 | 45.6 | 56.8 |
| Context [9] | 29.0 | 21.6 | 28.8 | 51.5 | 61.9 |
| Tracking [61] | 27.8 | 21.8 | 27.4 | 51.1 | 62.5 |
| Ours | 26.0 | 18.0 | 33.9 | 57.6 | 67.5 |
| ImageNet | 27.8 | 21.2 | 29.0 | 52.3 | 63.4 |

(Mean and Median angular error in degrees: lower is better. Percentage of pixels within 11.25°/22.5°/30°: higher is better.)
Table 5: Results on NYU v2 for per-pixel surface normal estimation, evaluated over valid pixels.
RGB image as input, we train the network to predict the normal/orientation of the pixels. We evaluate our method on the NYUv2 RGBD dataset [49]. We use the official split of 795 images for training and 654 images for testing. We follow the same protocols for generating surface normal ground truth and for evaluation as [14, 29, 15].
To train the network for surface normal estimation, we apply the Fully Convolutional Network (FCN 32-s) proposed in [38] with the VGG16 network as the base architecture. For the loss function, we follow the design in [60]. Specifically, instead of directly regressing the normal, we use a codebook of 40 codewords to encode the 3-dimensional normals. Each codeword represents one class, so we turn the problem into a 40-class classification for each pixel. We use the same hyperparameters as in [38] for training, and the network is fine-tuned for the same number of iterations (100K) for the different initializations.
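Mapping a 3-D unit normal to one of the 40 classes amounts to a nearest-codeword assignment; a minimal sketch (the codebook here is random for illustration, whereas in practice it would be obtained from the training normals):

```python
import numpy as np

def build_codebook(n_codes=40, seed=0):
    """Illustrative codebook of random unit vectors (a real one would be learned)."""
    rng = np.random.default_rng(seed)
    codes = rng.standard_normal((n_codes, 3))
    return codes / np.linalg.norm(codes, axis=1, keepdims=True)

def encode_normals(normals, codebook):
    """normals: [N, 3] unit vectors -> index of the most similar codeword."""
    return np.argmax(normals @ codebook.T, axis=1)  # cosine-similarity argmax

codebook = build_codebook()
n = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(encode_normals(n, codebook))  # per-pixel 40-way class labels
```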
To initialize the FCN model with the self-supervised networks, we copy the weights of the convolutional layers to the corresponding layers in the FCN. For the ImageNet pre-trained network, we follow [38] by converting the fully connected layers to convolutional layers and copying all the weights. For the model trained from scratch, we randomly initialize all the layers with "Xavier" initialization [18].
Table 5 shows the results. We report the mean and median angular error over all visible pixels (in degrees), and also the percentage of pixels with error less than 11.25, 22.5 and 30 degrees. Surprisingly, we obtain much better results with our self-supervised network than with ImageNet pre-training on this task (3 to 4% better in most metrics). As a comparison, the networks trained in [9, 61] are slightly worse than the ImageNet pre-trained network. These results suggest that our learned representations are competitive with ImageNet pre-training for high-level semantic tasks, but outperform it on tasks such as surface normal estimation. This experiment suggests that different visual tasks may prefer different levels of visual invariance.
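The angular-error statistics reported above can be computed as follows; a self-contained sketch assuming predicted and ground-truth unit normals plus a validity mask:

```python
import numpy as np

def normal_metrics(pred, gt, valid):
    """pred, gt: [N, 3] unit normals; valid: [N] boolean mask of valid pixels."""
    p, g = pred[valid], gt[valid]
    cos = np.clip(np.sum(p * g, axis=1), -1.0, 1.0)
    err = np.degrees(np.arccos(cos))               # angular error per pixel
    return {
        "mean": err.mean(),
        "median": np.median(err),
        "11.25": 100.0 * (err < 11.25).mean(),
        "22.5": 100.0 * (err < 22.5).mean(),
        "30": 100.0 * (err < 30.0).mean(),
    }

# Toy usage: identical normals give zero error and 100% within each threshold.
rng = np.random.default_rng(0)
v = rng.standard_normal((100, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(normal_metrics(v, v, np.ones(100, dtype=bool)))
```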
Acknowledgement: This work was supported by ONR MURI N000141612007 and Sloan Fellowship to AG.
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015. 2
[2] H. Bay, T. Tuytelaars, and L. V. Gool. Surf: Speeded up robust features. In ECCV, 2006. 4
[3] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. arXiv, 2015. 8
[4] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2007. 2
[5] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005. 2
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. 1, 7
[7] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In NIPS, 2013. 2, 4
[8] C. Doersch, A. Gupta, and A. A. Efros. Context as supervisory signal: Discovering objects with predictable context. In ECCV, 2014. 2
[9] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015. 1, 2, 3, 4, 5, 6, 7, 8
[10] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. arXiv, 2016. 2
[11] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. arXiv, 2016. 2
[12] S. M. A. Eslami, N. Heess, and J. Winn. The shape Boltzmann machine: a strong model of object shape. In CVPR, 2012. 2
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. IJCV, 2010. 6
[14] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3D primitives for single image understanding. In ICCV, 2013. 8
[15] D. F. Fouhey, A. Gupta, and M. Hebert. Unfolding an indoor origami world. In ECCV, 2014. 8
[16] R. Girshick. Fast R-CNN. In ICCV, 2015. 6, 7
[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. 1
[18] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010. 8
[19] Y. Gong, Y. Jia, T. K. Leung, A. Toshev, and S. Ioffe. Deep convolutional ranking for multilabel image annotation. arXiv, 2013. 2
[20] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014. 2
[21] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised learning of spatiotemporally coherent metrics. In ICCV, 2015. 2
[22] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006. 2
[23] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. TPAMI, 2015. 4
[24] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006. 2
[25] E. Hoffer and N. Ailon. Deep metric learning using triplet network. arXiv, 2015. 2
[26] D. Jayaraman and K. Grauman. Learning image representations tied to egomotion. In ICCV, 2015. 2
[27] D. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014. 2
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. 1
[29] L. Ladický, B. Zeisl, and M. Pollefeys. Discriminatively trained dense surface normal estimation. In ECCV, 2014. 8
[30] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In European Conference on Computer Vision (ECCV), 2016. 2
[31] G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In CVPR, 2017. 2
[32] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012. 2
[33] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989. 1
[34] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009. 2
[35] Y. Li, M. Paluri, J. M. Rehg, and P. Dollár. Unsupervised learning of edges. In CVPR, 2016. 2, 7
[36] X. Liang, S. Liu, Y. Wei, L. Liu, L. Lin, and S. Yan. Computational baby learning. arXiv, 2014. 5
[37] T. Lin, M. Maire, S. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. arXiv, 2014. 2, 8
[38] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 1, 8
[39] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 2004. 1
[40] Z. Luo, B. Peng, D.-A. Huang, A. Alahi, and L. Fei-Fei. Unsupervised Learning of Long-Term Motion Dynamics for Videos. In CVPR, 2017. 2
[41] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv, 2015. 2
[42] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and Learn: Unsupervised Learning using Temporal Order Verification. In ECCV, 2016. 2
[43] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997. 2
[44] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan. Learning features by watching objects move. In CVPR, 2017. 2 | 1708.02901#50 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 51 | [45] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA, 2016. 2
[46] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. 7, 8
[47] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015. 1
[48] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman. Using multiple segmentations to discover objects and their extent in image collections. In CVPR, 2006. 2
[49] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012. 8
[50] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv, 2014. 2, 3, 5 | 1708.02901#51 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 52 | [50] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv, 2014. 2, 3, 5
[51] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012. 2, 4 [52] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their location in images. In ICCV, 2005. 2
[53] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. arXiv, 2015. 2 | 1708.02901#52 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 53 | [54] Y. Tang, R. Salakhutdinov, and G. Hinton. Robust Boltzmann machines for recognition and denoising. In CVPR, 2012. 2 [55] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013. 7 [56] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. 2
[57] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from variational autoencoders. In ECCV, 2016. 2
[58] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013. 4 | 1708.02901#53 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 54 | [58] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013. 4
[59] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. Learning fine-grained image similarity with deep ranking. In CVPR, 2014. 2, 5 [60] X. Wang, D. F. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In CVPR, 2015. 8 [61] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015. 1, 2, 3, 4, 5, 6, 7, 8
[62] A. R. Zamir, T. Wekel, P. Agrawal, C. Wei, J. Malik, and S. Savarese. 3D Representations via Pose Estimation and Matching. In ECCV, 2016. 2
[63] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016. 2 | 1708.02901#54 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02901 | 55 | [63] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016. 2
[64] R. Zhang, P. Isola, and A. A. Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In CVPR, 2017. 2
[65] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns. In ICLR, 2015. 6
[66] T. Zhou, M. Brown, N. Snavely, and D. Lowe. Unsupervised Learning of Depth and Ego-motion from Video. In CVPR,
2017. 2 | 1708.02901#55 | Transitive Invariance for Self-supervised Visual Representation Learning | Learning visual representations with self-supervised learning has become
popular in computer vision. The idea is to design auxiliary tasks where labels
are free to obtain. Most of these tasks end up providing data to learn specific
kinds of invariance useful for recognition. In this paper, we propose to
exploit different self-supervised approaches to learn representations invariant
to (i) inter-instance variations (two objects in the same class should have
similar features) and (ii) intra-instance variations (viewpoint, pose,
deformations, illumination, etc). Instead of combining two approaches with
multi-task learning, we argue to organize and reason the data with multiple
variations. Specifically, we propose to generate a graph with millions of
objects mined from hundreds of thousands of videos. The objects are connected
by two types of edges which correspond to two types of invariance: "different
instances but a similar viewpoint and category" and "different viewpoints of
the same instance". By applying simple transitivity on the graph with these
edges, we can obtain pairs of images exhibiting richer visual invariance. We
use this data to train a Triplet-Siamese network with VGG16 as the base
architecture and apply the learned representations to different recognition
tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast
R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO
dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised
counterpart (24.4%) using the Faster R-CNN framework. We also show that our
network can perform significantly better than the ImageNet network in the
surface normal estimation task. | http://arxiv.org/pdf/1708.02901 | Xiaolong Wang, Kaiming He, Abhinav Gupta | cs.CV | ICCV 2017 | null | cs.CV | 20170809 | 20170815 | [] |
1708.02556 | 1 | We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators in a similar spirit to GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as the final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the | 1708.02556#1 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 2 | mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among generators' distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators. | 1708.02556#2 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 3 | 1
# INTRODUCTION
Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) are a recent novel class of deep generative models that are successfully applied to a large variety of applications such as image, video generation, image inpainting, semantic segmentation, image-to-image translation, and text-to-image synthesis, to name a few (Goodfellow, 2016). From the game theory metaphor, the model consists of a discriminator and a generator playing a two-player minimax game, wherein the generator aims to generate samples that resemble those in the training data whilst the discriminator tries to distinguish between the two as narrated in (Goodfellow et al., 2014). Training GAN, however, is challenging as it can be easily trapped into the mode collapsing problem where the generator only concentrates on producing samples lying on a few modes instead of the whole data space (Goodfellow, 2016).
Many GAN variants have been recently proposed to address this problem. They can be grouped into two main categories: training either a single generator or many generators. Methods in the former
1 | 1708.02556#3 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 4 | Many GAN variants have been recently proposed to address this problem. They can be grouped into two main categories: training either a single generator or many generators. Methods in the former
1
include modifying the discriminator's objective (Salimans et al., 2016; Metz et al., 2016), modifying the generator's objective (Warde-Farley & Bengio, 2016), or employing additional discriminators to yield more useful gradient signals for the generators (Nguyen et al., 2017; Durugkar et al., 2016). The common theme in these variants is that generators are shown, at equilibrium, to be able to recover the data distribution, but convergence remains elusive in practice. Most experiments are conducted on toy datasets or on narrow-domain datasets such as LSUN (Yu et al., 2015) or CelebA (Liu et al., 2015). To our knowledge, only Warde-Farley & Bengio (2016) and Nguyen et al. (2017) perform quantitative evaluation of models trained on much more diverse datasets such as STL-10 (Coates et al., 2011) and ImageNet (Russakovsky et al., 2015). | 1708.02556#4 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 5 | Given current limitations in the training of single-generator GANs, some very recent attempts have been made following the multi-generator approach. Tolstikhin et al. (2017) apply boosting techniques to train a mixture of generators by sequentially training and adding new generators to the mixture. However, sequentially training many generators is computationally expensive. Moreover, this approach is built on the implicit assumption that a single-generator GAN can generate very good images of some modes, so reweighting the training data and incrementally training new generators will result in a mixture that covers the whole data space. This assumption is not true in practice since current single-generator GANs trained on diverse datasets such as ImageNet tend to generate images of unrecognizable objects. Arora et al. (2017) train a mixture of generators and discriminators, and optimize the minimax game with the reward function being the weighted average reward function between any pair of generator and discriminator. This model is computationally expensive and lacks a mechanism to enforce the divergence among generators. Ghosh et al. (2017) train many generators by using a multi-class discriminator that, in | 1708.02556#5 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 6 | a mechanism to enforce the divergence among generators. Ghosh et al. (2017) train many generators by using a multi-class discriminator that, in addition to detecting whether a data sample is fake, predicts which generator produces the sample. The objective function in this model punishes generators for generating samples that are detected as fake but does not directly encourage generators to specialize in generating different types of data. | 1708.02556#6 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 7 | We propose in this paper a novel approach to train a mixture of generators. Unlike the aforementioned multi-generator GANs, our proposed model simultaneously trains a set of generators with the objective that the mixture of their induced distributions would approximate the data distribution, whilst encouraging them to specialize in different data modes. The result is a novel adversarial architecture formulated as a minimax game among three parties: a classifier, a discriminator, and a set of generators. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. We term our proposed model Mixture Generative Adversarial Nets (MGAN). We provide analysis that our model is optimized towards minimizing the Jensen-Shannon Divergence (JSD) between the mixture of distributions induced by the generators and the data distribution while maximizing the JSD among generators. | 1708.02556#7 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 8 | Empirically, our proposed model can be trained efficiently by utilizing parameter sharing among generators, and between the classifier and the discriminator. In addition, simultaneously training many generators while enforcing JSD among generators helps each of them focus on some modes of the data space and learn better. Trained on CIFAR-10, each generator learned to specialize in generating samples from a different class such as horse, car, ship, dog, bird or airplane. Overall, the models trained on the CIFAR-10, STL-10 and ImageNet datasets successfully generated diverse, recognizable objects and achieved state-of-the-art Inception scores (Salimans et al., 2016). The model trained on the CIFAR-10 even outperformed GANs trained in a semi-supervised fashion (Salimans et al., 2016; Odena et al., 2016).
In short, our main contributions are: (i) a novel adversarial model to efficiently train a mixture of generators while enforcing the JSD among the generators; (ii) a theoretical analysis that our objective function is optimized towards minimizing the JSD between the mixture of all generators' distributions and the real data distribution, while maximizing the JSD among generators; and (iii) a comprehensive evaluation on the performance of our method on both synthetic and real-world large-scale datasets of diverse natural scenes.
2 | 1708.02556#8 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 9 | 2
[Figure 1 schematic: generators G_1(z), ..., G_K(z) with tied parameters; an index u ~ Mult(π) selects which generator's sample is output; the discriminator D distinguishes between x ~ P_data and G_u(z); the classifier C predicts which generator was used.]
Figure 1: MGAN's architecture with K generators, a binary discriminator, and a multi-class classifier.
# 2 GENERATIVE ADVERSARIAL NETS
Given the discriminator D and generator G, both parameterized via neural networks, training GAN can be formulated as the following minimax objective function:
$\min_G \max_D \; \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))] \quad (1)$ | 1708.02556#9 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 10 | $\min_G \max_D \; \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))] \quad (1)$
where x is drawn from the data distribution Pdata and z is drawn from a prior distribution Pz. The mapping G(z) induces a generator distribution Pmodel in data space. GAN alternately optimizes D and G using stochastic gradient-based learning. As a result, the optimization order in Eq. (1) can be reversed, causing the minimax formulation to become maximin. G is therefore incentivized to map every z to a single x that is most likely to be classified as true data, leading to the mode collapsing problem. Another commonly asserted cause of generating less diverse samples in GAN is that, at the optimal point of D, minimizing the generator objective is equivalent to minimizing the JSD between the data and model distributions, which has been empirically shown to prefer generating samples around only a few modes whilst ignoring other modes (Huszár, 2015; Theis et al., 2015).
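To make Eq. (1) concrete, the following is a minimal sketch of the two alternating losses. It assumes PyTorch and hypothetical modules `G` and `D` (with `D` returning raw logits); it illustrates the standard GAN updates rather than any specific implementation from the papers discussed here.

```python
# Minimal sketch of the Eq. (1) losses. G, D, x_real, and z are illustrative
# assumptions: D returns raw logits, G returns samples of the same shape as x_real.
import torch.nn.functional as F

def gan_losses(G, D, x_real, z):
    """Return (discriminator loss, generator loss) for one mini-batch."""
    x_fake = G(z)

    # D maximizes E[log D(x)] + E[log(1 - D(G(z)))]; we minimize the negative.
    d_real = D(x_real)
    d_fake = D(x_fake.detach())      # detach so no gradient reaches G in the D step
    loss_d = -(F.logsigmoid(d_real).mean() + F.logsigmoid(-d_fake).mean())

    # G minimizes E[log(1 - D(G(z)))] exactly as written in Eq. (1); in practice
    # the non-saturating heuristic -E[log D(G(z))] is often substituted.
    loss_g = F.logsigmoid(-D(x_fake)).mean()
    return loss_d, loss_g
```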
# 3 PROPOSED MIXTURE GANS | 1708.02556#10 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 11 | # 3 PROPOSED MIXTURE GANS
We now present our main contribution: a novel approach that can effectively tackle mode collapse in GANs. Our idea is to approximate the data distribution with a mixture of many distributions rather than a single one as in the standard GAN, while simultaneously enlarging the divergence among those distributions so that they cover different data modes.
To this end, the training can be formulated as a game among K generators G1:K, a discriminator D and a classifier C. Each generator Gk maps z to x = Gk(z), thus inducing a single distribution PGk; the K generators altogether induce a mixture over K distributions, namely Pmodel, in the data space. An index u is drawn from a multinomial distribution Mult(π), where π = [π1, π2, ..., πK] are the coefficients of the mixture, and then the sample Gu(z) is used as the output. Here, we use a predefined π and fix it instead of learning it. The discriminator D aims to distinguish between this sample and the training samples. The classifier C performs multi-class classification to classify samples labeled by the indices of their corresponding generators. We term this whole process and our model the Mixture Generative Adversarial Nets (MGAN). | 1708.02556#11 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 12 | Fig. 1 illustrates the general architecture of our proposed MGAN, where all components are parameterized by neural networks. The generators Gk tie their parameters together except for the input layer, whilst C and D share parameters except for the output layer. This parameter-sharing scheme enables the networks to leverage their common information, such as features at low-level layers close to the data layer, and hence helps to train the model effectively. In addition, it also minimizes the number of parameters and adds minimal complexity to the standard GAN, so the whole process remains very efficient.
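As an illustration of the sampling mechanism described above (an index u ~ Mult(π) selecting the output G_u(z)) and of the parameter-sharing scheme, below is a minimal sketch assuming PyTorch; the layer sizes, module names, and uniform π are assumptions for the example, not the authors' exact architecture. C and D can share a feature trunk in the same spirit, with a one-logit head for D and a K-way head for C.

```python
# Minimal sketch of a mixture of K generators with tied layers. Only the
# per-generator input projections are untied; pi is predefined and fixed.
import torch
import torch.nn as nn

class MixtureGenerator(nn.Module):
    def __init__(self, K, z_dim=100, hidden=256, x_dim=784):
        super().__init__()
        # One input layer per generator (the only untied parameters).
        self.inputs = nn.ModuleList([nn.Linear(z_dim, hidden) for _ in range(K)])
        # All layers after the input are shared across the K generators.
        self.shared = nn.Sequential(nn.ReLU(), nn.Linear(hidden, x_dim), nn.Tanh())
        # Predefined, fixed mixture weights (uniform here).
        self.register_buffer("pi", torch.full((K,), 1.0 / K))

    def forward(self, z):
        # Draw a generator index u ~ Mult(pi) per noise vector and emit G_u(z).
        u = torch.multinomial(self.pi, num_samples=z.size(0), replacement=True)
        h = torch.stack([self.inputs[int(k)](z[i]) for i, k in enumerate(u)])
        # Return the samples and the generator labels that the classifier C predicts.
        return self.shared(h), u
```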
More formally, D, C and G1:K now play the following multi-player minimax optimization game:
$$\min_{G_{1:K},\,C}\ \max_{D}\ \ J(G_{1:K}, C, D) = \mathbb{E}_{x \sim P_{data}}[\log D(x)] + \mathbb{E}_{x \sim P_{model}}[\log(1 - D(x))] - \beta \left\{ \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{x \sim P_{G_k}}[\log C_k(x)] \right\} \quad (2)$$
| 1708.02556#12 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 13 | $- \beta \left\{ \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{x\sim P_{G_k}}[\log C_k(x)] \right\} \quad (2)$
where Ck(x) is the probability that x is generated by Gk and β > 0 is the diversity hyper-parameter. The first two terms show the interaction between generators and the discriminator, as in the standard GAN. The last term should be recognized as a standard softmax loss for a multi-class classification setting, which aims to maximize the entropy for the classifier. This represents the interaction between generators and the classifier, which encourages each generator to produce data separable from those produced by other generators. The strength of this interaction is controlled by β. Similar to GAN, our proposed network can be trained by alternately updating D, C and G1:K. We refer to Appendix A for the pseudo-code and algorithms for parameter learning for our proposed MGAN.
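To show how Eq. (2) translates into per-step losses, here is a hedged sketch of one update, assuming the MixtureGenerator / DiscriminatorClassifier interfaces from the earlier sketch, uniform mixture weights π_k = 1/K, and the common non-saturating surrogate for the generators' GAN term (the literal objective uses log(1 − D(x))).

```python
import torch
import torch.nn.functional as F

def mgan_step_losses(G, DC, x_real, K, z_dim, beta):
    """One evaluation of the Eq. (2) terms; in practice D, C and the generators
    are updated alternately, each with its own optimizer."""
    B = x_real.size(0)
    # Mixture mechanism: draw a generator index u for every noise sample.
    ks = torch.randint(0, K, (B,))
    z = torch.randn(B, z_dim)
    x_fake = torch.cat([G(z[i:i + 1], int(ks[i])) for i in range(B)], dim=0)

    # Discriminator D: maximize log D(x_real) + log(1 - D(x_fake)).
    d_real, _ = DC(x_real)
    d_fake_det, _ = DC(x_fake.detach())
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake_det, torch.zeros_like(d_fake_det))

    # Classifier C: the beta-weighted softmax (cross-entropy) term over generator indices.
    d_fake, c_fake = DC(x_fake)
    loss_C = beta * F.cross_entropy(c_fake, ks)

    # Generators: fool D (non-saturating form) while staying classifiable for C,
    # which pushes the K induced distributions apart.
    loss_G = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) \
           + beta * F.cross_entropy(c_fake, ks)
    return loss_D, loss_C, loss_G
```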
3.1 THEORETICAL ANALYSIS | 1708.02556#13 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 14 | 3.1 THEORETICAL ANALYSIS
Assuming all C, D and G1:K have enough capacity, we show below that at the equilibrium point of the minimax problem in Eq. (2), the JSD between the mixture induced by G1:K and the data distribution is minimal, i.e. pdata = pmodel, and the JSD among the K generators is maximal, i.e. two arbitrary generators almost never produce the same data. In what follows we present our mathematical statements and the sketch of their proofs. We refer to Appendix B for full derivations. Proposition 1. For fixed generators G1, G2, ..., GK and their mixture weights π1, π2, ..., πK, the optimal solution C* = C*_{1:K} and D* for J(G1:K, C, D) in Eq. (2) are:
$$C^*_k(x) = \frac{\pi_k\, p_{G_k}(x)}{\sum_{j=1}^{K} \pi_j\, p_{G_j}(x)} \qquad \text{and} \qquad D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{\mathrm{model}}(x)}$$
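As a quick numerical sanity check of these closed forms (our own toy example, not from the paper), one can evaluate them on simple 1-D Gaussian densities:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pi = np.array([0.5, 0.5])                                   # fixed mixture weights
x = np.linspace(-3.0, 3.0, 7)
p_g = np.stack([gauss(x, -1.0, 0.5), gauss(x, 1.0, 0.5)])   # p_{G_k}(x), k = 1, 2
p_model = (pi[:, None] * p_g).sum(axis=0)                   # mixture induced by the generators
p_data = 0.5 * gauss(x, -1.0, 0.5) + 0.5 * gauss(x, 1.0, 0.5)

C_star = pi[:, None] * p_g / p_model                        # optimal classifier C*_k(x)
D_star = p_data / (p_data + p_model)                        # optimal discriminator D*(x)

print(np.allclose(C_star.sum(axis=0), 1.0))   # True: C*(x) is a valid distribution over k
print(np.round(D_star, 3))                    # all 0.5 here, since p_data = p_model in this toy
```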
C â | 1708.02556#14 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 15 | C â
Proof. It can be seen that the solution C*_k is a general case of D* when D classifies samples from two distributions with equal weight of 1/2. We refer the proof for D* to Prop. 1 in (Goodfellow et al., 2014), and our proof for C*_k to Appendix B of this manuscript.
Based on Prop. 1, we further show that at the equilibrium point of the minimax problem in Eq. (2), the optimal generators G* = [G*_1, ..., G*_K] induce the generated distribution p*_model(x) = Σ_{k=1}^K π_k p_{G*_k}(x) that is as close as possible to the true data distribution pdata(x), while keeping the mixture components p_{G*_k}(x) as far apart as possible so as to avoid the mode collapse.
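Theorem 2 below formalizes this trade-off through the generalized Jensen-Shannon divergence JSD_π. As a small self-contained illustration (our own example) of why maximizing JSD_π counters mode collapse: two identical generators give JSD_π = 0, while generators with disjoint supports reach its maximum H(π).

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def jsd_pi(dists, pi):
    """Generalized JSD_pi(P_1, ..., P_K) for discrete distributions (rows of `dists`)."""
    mixture = (pi[:, None] * dists).sum(axis=0)
    return entropy(mixture) - (pi * np.array([entropy(d) for d in dists])).sum()

pi = np.array([0.5, 0.5])
collapsed = np.array([[0.25, 0.25, 0.25, 0.25],    # two identical generators (mode collapse)
                      [0.25, 0.25, 0.25, 0.25]])
separated = np.array([[0.5, 0.5, 0.0, 0.0],        # two generators covering disjoint modes
                      [0.0, 0.0, 0.5, 0.5]])

print(jsd_pi(collapsed, pi))   # 0.0
print(jsd_pi(separated, pi))   # log 2 ~ 0.693 = H(pi), the maximum
```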
Theorem 2. At the equilibrium point of the minimax problem in Eq. (2), the optimal G*, D*, and C* satisfy
# k | 1708.02556#15 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 16 | # k
$$G^* = \arg\min_{G}\ \Big( 2\cdot \mathrm{JSD}(P_{\mathrm{data}} \,\|\, P_{\mathrm{model}}) - \beta\cdot \mathrm{JSD}_{\pi}(P_{G_1}, P_{G_2}, \ldots, P_{G_K}) \Big) \quad (3)$$
$$C^*_k(x) = \frac{\pi_k\, p_{G^*_k}(x)}{\sum_{j=1}^{K} \pi_j\, p_{G^*_j}(x)} \qquad \text{and} \qquad D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{\mathrm{model}}(x)}$$
Proof. Substituting C*_{1:K} and D* into Eq. (2), we reformulate the objective function for G1:K as follows:
$$\mathcal{L}(G_{1:K}) = \mathbb{E}_{x\sim P_{\mathrm{data}}}\!\left[\log \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x)+p_{\mathrm{model}}(x)}\right] + \mathbb{E}_{x\sim P_{\mathrm{model}}}\!\left[\log \frac{p_{\mathrm{model}}(x)}{p_{\mathrm{data}}(x)+p_{\mathrm{model}}(x)}\right] - \beta \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{x\sim P_{G_k}}\!\left[\log \frac{\pi_k\, p_{G_k}(x)}{\sum_{j=1}^{K}\pi_j\, p_{G_j}(x)}\right]$$
$$= 2\cdot \mathrm{JSD}(P_{\mathrm{data}}\,\|\,P_{\mathrm{model}}) - \log 4 - \beta \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{x\sim P_{G_k}}\!\left[\log \frac{p_{G_k}(x)}{p_{\mathrm{model}}(x)}\right] - \beta \sum_{k=1}^{K} \pi_k \log \pi_k$$
$$= 2\cdot \mathrm{JSD}(P_{\mathrm{data}}\,\|\,P_{\mathrm{model}}) - \beta\cdot \mathrm{JSD}_{\pi}(P_{G_1}, P_{G_2}, \ldots, P_{G_K}) - \log 4 - \beta \sum_{k=1}^{K} \pi_k \log \pi_k \quad (4)$$
Since the last two terms in Eq. (4) are constant, that concludes our proof. | 1708.02556#16 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 17 | Since the last two terms in Eq. (4) are constant, that concludes our proof.
This theorem shows that progressing towards the equilibrium is equivalent to minimizing JSD(Pdata ‖ Pmodel) while maximizing JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}). In the next theorem, we further clarify the equilibrium point for the specific case wherein the data distribution has the form
pdata(x) = Σ_{k=1}^K π_k q_k(x), where the mixture components q_k(x) are well-separated in the sense that E_{q_k}[q_j(x)] = 0 for j ≠ k, i.e., for almost every x, if q_k(x) > 0 then q_j(x) = 0 for all j ≠ k. Theorem 3. If the data distribution has the form pdata(x) = Σ_{k=1}^K π_k q_k(x) where the mixture components q_k(x) are well-separated, the minimax problem in Eq. (2) or the optimization problem in Eq. (3) has the following solution:
p_{G*_k}(x) = q_k(x) for k = 1, ..., K, and the corresponding objective value of the optimization problem in Eq. (3) is −βH(π) = −β Σ_{k=1}^K π_k log(1/π_k), where H(π) is the Shannon entropy.
Proof. Please refer to our proof in Appendix B of this manuscript. | 1708.02556#17 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 18 | Thm. 3 explicitly offers the optimal solution for the specific case wherein the real data are generated from a mixture distribution whose components are well-separated. This further reveals that if the mixture components are well-separated, by setting the number of generators to the number of mixture components in the data and maximizing the divergence between the generated components p_{G_k}(x), we can exactly recover the mixture components q_k(x) using the generated components p_{G_k}(x), hence strongly supporting our motivation when developing MGAN. In practice, C, D, and G1:K are parameterized by neural networks and are optimized in the parameter space rather than in the function space. As all generators G1:K share the same objective function, we can efficiently update their weights using the same backpropagation passes. Empirically, we set the parameter π_k = 1/K for all k ∈ {1, ..., K}, which further minimizes the objective value −βH(π) = −β Σ_{k=1}^K π_k log(1/π_k) w.r.t. π in Thm. 3. To simplify the computational graph, we assume that each generator is | 1708.02556#18 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 20 | # 4 RELATED WORK
Recent attempts to address the mode collapse by modifying the discriminator include minibatch discrimination (Salimans et al., 2016), Unrolled GAN (Metz et al., 2016) and Denoising Feature Matching (DFM) (Warde-Farley & Bengio, 2016). The idea of minibatch discrimination is to allow the discriminator to detect samples that are noticeably similar to other generated samples. Although this method can generate visually appealing samples, it is computationally expensive and is thus normally used only in the last hidden layer of the discriminator. Unrolled GAN improves the learning by unrolling the computational graph to include additional optimization steps of the discriminator. It could effectively reduce the mode collapsing problem, but the unrolling step is expensive, rendering it unable to scale up to large-scale datasets. DFM augments the objective function of the generator with that of a Denoising AutoEncoder (DAE) that minimizes the reconstruction error of activations at the penultimate layer of the discriminator. The idea is that gradient signals from the DAE can guide the generator towards producing samples whose activations are close to the manifold of real data activations. DFM is surprisingly effective at avoiding mode collapse, but the involvement of a deep DAE adds considerable computational cost to the model. | 1708.02556#19 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 20 | An alternative approach is to train additional discriminators. D2GAN (Nguyen et al., 2017) employs two discriminators to minimize both the Kullback-Leibler (KL) and reverse KL divergences, thus placing a fair distribution across the data modes. This method can avoid the mode collapsing problem to a certain extent, but still could not outperform DFM. Another work uses many discriminators to boost the learning of the generator (Durugkar et al., 2016). The authors state that this method is robust to mode collapse, but did not provide experimental results to support that claim.
Another direction is to train multiple generators. The so-called MIX+GAN (Arora et al., 2017) is related to our model in the use of mixture but the idea is very different. Based on min-max theorem (Neumann, 1928), the MIX+GAN trains a mixture of multiple generators and discriminators with
different parameters to play mixed strategies in a min-max game. The total reward of this game is computed by weighted averaging rewards over all pairs of generator and discriminator. The lack of parameter sharing renders this method computationally expensive to train. Moreover, there is no mechanism to enforce the divergence among generators as in ours. | 1708.02556#21 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 21 | Some attempts have been made to train a mixture of GANs in a similar spirit to boosting algorithms. Wang et al. (2016) propose an additive procedure to incrementally train new GANs on a subset of the training data that is badly modeled by previous generators. As the discriminator is expected to classify samples from this subset as real with high confidence, i.e. D(x) is high, the subset can be chosen to include x where D(x) is larger than a predefined threshold. Tolstikhin et al. (2017), however, show that this heuristic fails to address the mode collapsing problem. Thus they propose AdaGAN, which introduces a robust reweighting scheme to prepare the training data for the next GAN. AdaGAN and boosting-inspired GANs in general are based on the assumption that a single-generator GAN can learn to generate impressive images of some modes such as dogs or cats but fails to cover other modes such as giraffes. Therefore, removing images of dogs or cats from the training data and training the next GAN can create a better mixture. This assumption does not hold in practice, as current single-generator GANs trained on diverse datasets such as ImageNet (Russakovsky et al., 2015) tend to generate images of unrecognizable objects. | 1708.02556#21 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 22 | The most closely related to ours is MAD-GAN (Ghosh et al., 2017), which trains many generators and uses a multi-class classifier as the discriminator. In this work, two strategies are proposed to address the mode collapse: (i) augmenting the generator's objective function with a user-defined similarity-based function to encourage different generators to generate diverse samples, and (ii) modifying the discriminator's objective functions to push different generators towards different identifiable modes by separating the samples of each generator. Our approach is different in that, rather than modifying the discriminator, we use an additional classifier that discriminates samples produced by each generator from those produced by the others under a multi-class classification setting. This nicely results in an optimization problem that maximizes the JSD among generators, thus naturally enforcing them to generate diverse samples and effectively avoiding mode collapse.
# 5 EXPERIMENTS | 1708.02556#23 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 24 | # 5 EXPERIMENTS
In this section, we conduct experiments on both synthetic data and real-world large-scale datasets. The aim of using synthetic data is to visualize, examine and evaluate the learning behaviors of our proposed MGAN, whilst the real-world datasets are used to quantitatively demonstrate its efficacy and scalability in addressing the mode collapse in a much larger and wider data space. For a fair comparison, we use experimental settings that are identical to previous work, and hence we quote the results from the latest state-of-the-art GAN-based models to compare with ours. | 1708.02556#23 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 24 | We use TensorFlow (Abadi et al., 2016) to implement our model and will release the code after publication. For all experiments, we use: (i) shared parameters among generators in all layers except for the weights from the input to the first hidden layer; (ii) shared parameters between the discriminator and the classifier in all layers except for the weights from the penultimate layer to the output; (iii) the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0002 and a first-order momentum of 0.5; (iv) a minibatch size of 64 samples for training discriminators; (v) ReLU activations (Nair & Hinton, 2010) for generators; (vi) Leaky ReLU (Maas et al., 2013) with a slope of 0.2 for the discriminator and classifier; and (vii) weights randomly initialized from the Gaussian distribution N(0, 0.02I) and zero biases. We refer to Appendix C for detailed model architectures and additional experimental results.
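For concreteness, settings (iii)-(vii) can be written down as the following PyTorch-style sketch (our own rendering; the tiny MLPs stand in for the real architectures in Appendix C, and we assume the 0.02 in N(0, 0.02I) is used as the initialization standard deviation, following the usual DCGAN convention).

```python
import torch
import torch.nn as nn

def init_weights(m):
    # (vii) weights drawn from a zero-mean Gaussian (std 0.02 assumed), zero biases.
    if isinstance(m, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# Placeholder networks; (v) ReLU for generators, (vi) LeakyReLU(0.2) for D and C.
G  = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
DC = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1 + 4))
G.apply(init_weights)
DC.apply(init_weights)

# (iii) Adam with learning rate 0.0002 and first-order momentum 0.5; (iv) minibatch of 64.
opt_G  = torch.optim.Adam(G.parameters(),  lr=2e-4, betas=(0.5, 0.999))
opt_DC = torch.optim.Adam(DC.parameters(), lr=2e-4, betas=(0.5, 0.999))
batch_size = 64
```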
# 5.1 SYNTHETIC DATA | 1708.02556#25 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 26 | # 5.1 SYNTHETIC DATA
In the first experiment, following (Nguyen et al., 2017) we reuse the experimental design proposed in (Metz et al., 2016) to investigate how well our MGAN can explore and capture multiple data modes. The training data is sampled from a 2D mixture of 8 isotropic Gaussian distributions with a covariance matrix of 0.02I and means arranged in a circle of zero centroid and radius of 2.0. Our purpose of using such a small variance is to create low density regions and separate the modes.
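This training distribution is easy to reproduce; below is a short NumPy sketch (our own code; the equal mode weights and the fixed seed are assumptions).

```python
import numpy as np

def sample_ring(n, modes=8, radius=2.0, var=0.02, seed=0):
    """Mixture of 8 isotropic 2D Gaussians with covariance 0.02*I and means on a
    circle of radius 2.0 centered at the origin, as described above."""
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(modes) / modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (modes, 2)
    ks = rng.integers(0, modes, size=n)                                   # mode index per sample
    return means[ks] + rng.normal(scale=np.sqrt(var), size=(n, 2))

x_train = sample_ring(512)   # e.g. 512 points spread over the 8 well-separated modes
```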
We employ 8 generators, each with a simple architecture of an input layer with 256 noise units drawn from isotropic multivariate Gaussian distribution N (0, I), and two fully connected hidden
a | 1708.02556#26 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 27 | a
(a) Symmetric KL divergence. (b) Wasserstein distance. (c) Evolution of data (in blue) generated by GAN, UnrolledGAN, D2GAN and our MGAN from the top row to the bottom, respectively. Data sampled from the true mixture of 8 Gaussians are red.
(Horizontal axis: training step, from 5K to 25K.)
Figure 2: The comparison of our MGAN and GAN's variants on the 2D synthetic dataset.
layers with 128 ReLU units each. For the discriminator and classiï¬er, one hidden layer with 128 ReLU units is used. The diversity hyperparameter β is set to 0.125. | 1708.02556#27 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 27 | Fig. 2c shows the evolution of 512 samples generated by our model and the baselines through time. It can be seen that the regular GAN generates data collapsing into a single mode hovering around the valid modes of the data distribution, thus reflecting the mode collapse in GAN as expected. At the same time, UnrolledGAN (Metz et al., 2016), D2GAN (Nguyen et al., 2017) and our MGAN distribute data around all 8 mixture components, hence demonstrating their ability to successfully learn multimodal data in this case. Our proposed model, however, converges much faster than the other two, since it successfully explores and neatly covers all modes by the early step 15K, whilst the two baselines produce samples cycling around until the last steps. In the end, our MGAN captures the data modes more precisely than UnrolledGAN and D2GAN: in each mode, UnrolledGAN generates data that concentrate only on several points around the mode's centroid, and thus seems to produce fewer samples than ours, whose samples spread fairly over the entire mode without exceeding its boundary, whilst D2GAN still generates many points scattered between two adjacent modes. | 1708.02556#27 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 28 | Next we further quantitatively compare the quality of the generated data. Since we know the true distribution Pdata in this case, we employ two measures, namely the symmetric Kullback-Leibler (KL) divergence and the Wasserstein distance. These measures compute the distance between the normalized histograms of 10,000 points generated from the model and the true Pdata. Figs. 2a and 2b again clearly demonstrate the superiority of our approach over GAN, UnrolledGAN and D2GAN w.r.t. both distances (lower is better); notably, the Wasserstein distances from ours and D2GAN's to the true distribution almost reduce to zero, and at the same time, our symmetric KL metric is significantly better than that of D2GAN. These figures also show the stability of our MGAN (black curves) and D2GAN (red curves) during training, as they fluctuate much less compared with GAN (green curves) and UnrolledGAN (blue curves). | 1708.02556#28 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |