WRPN: Wide Reduced-Precision Networks
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
arXiv:1709.01134 (cs.CV, cs.LG, cs.NE), September 2017. Source: http://arxiv.org/pdf/1709.01134

Abstract: For computer vision applications, prior work has shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both training and inference when using mini-batches of inputs. One way to reduce this footprint is to reduce the precision of activations. However, past work has shown that reducing activation precision hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve execution efficiency (e.g., reduce dynamic memory footprint, memory bandwidth, and computational energy) and speed up training and inference with appropriate hardware support. We call our scheme WRPN, for wide reduced-precision networks. We report results showing that the WRPN scheme achieves better accuracy on the ILSVRC-12 dataset than previously reported reduced-precision networks while being computationally less expensive.
Table 2 reports the accuracy of AlexNet when we double the number of filter maps in a layer. With doubled filter maps, AlexNet with 4-bit weights and 2-bit activations exhibits accuracy on par with the full-precision network. Operating with 4-bit weights and 4-bit activations surpasses the baseline accuracy by 1.44%. With binary weights and activations we better the accuracy of XNOR-NET [17] by 4%.

When doubling the number of filter maps, AlexNet's raw compute operations grow by 3.9x compared to the baseline full-precision network; however, using reduced-precision operands keeps the overall compute complexity at a fraction of the baseline. For example, with 4-bit operands for weights and activations and 2x the number of filters, reduced-precision AlexNet incurs just 49% of the total compute cost of the full-precision baseline (the compute cost comparison is shown in Table 3). Here, compute cost is the product of the number of FMA operations and the sum of the bit-widths of the activation and weight operands.

Table 3: Compute cost of AlexNet 2x-wide vs. 1x-wide as precision of activations (A) and weights (W) changes.
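To make the cost metric concrete, here is a rough sketch (ours, not the authors' code) of the relative compute cost of a widened, reduced-precision layer under the definition above. The function name and the assumption that FMA count scales with the square of the width multiplier are ours; the paper reports 3.9x rather than exactly 4x for AlexNet because not every layer widens uniformly.

```python
def relative_compute_cost(width_mult, a_bits, w_bits, base_bits=32):
    """Estimate compute cost relative to a 1x-wide, FP32 baseline layer.

    Cost metric from the text: (#FMA ops) * (activation bits + weight bits).
    Assumption (ours): widening by width_mult grows FMA count by ~width_mult**2,
    since both input and output channel counts grow.
    """
    ops_ratio = width_mult ** 2
    return ops_ratio * (a_bits + w_bits) / (2.0 * base_bits)

# Rough checks against figures quoted in the text:
print(relative_compute_cost(2.0, 4, 4))   # 0.5 -- paper reports ~0.49 for AlexNet,
                                          # whose raw ops grow 3.9x rather than 4x
print(relative_compute_cost(2.0, 1, 1))   # 0.125 -- same ballpark as the 0.15x
                                          # listed for 2x-wide binary ResNet-34
```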
We also experiment with other widening factors. With 1.3x widening of filters and 4-bit activation precision, one can go as low as 8-bit weight precision while remaining on par with baseline accuracy. With 1.1x-wide filters, at least 8-bit weight and 16-bit activation precision is required to match the baseline full-precision 1x-wide accuracy. Further, as Table 3 shows, when widening filters by 2x, precision must be lowered to at least 8 bits for the total compute cost not to exceed the baseline. Thus, there is a trade-off between widening and reducing the precision of network parameters. In our work, we trade a higher number of raw compute operations for aggressively reduced precision of the operands involved in these operations (activation maps and filter weights), while not sacrificing model accuracy. Apart from the other benefits of reduced-precision activations mentioned earlier, widening filter maps also improves the efficiency of the underlying GEMM calls for convolution operations, since compute accelerators are typically more efficient running a single kernel consisting of parallel computation on large data structures than many small kernels [24].
# 4 Studies on deeper networks

We study how our scheme applies to deeper networks. For this, we study ResNet-34 [8] and batch-normalized Inception [9] and find similar trends; in particular, 2-bit weights and 4-bit activations continue to provide accuracy on par with the baseline. We use TensorFlow [2] and tensorpack [1] for all our evaluations and use the ILSVRC-12 train and validation datasets for analysis.2
# 4.1 ResNet

ResNet-34 has 3x3 filters in each of its modular layers, with 1x1 shortcut connections. The filter bank width grows from 64 to 512 with depth. We use the pre-activation variant of ResNet, and the baseline top-1 accuracy of our ResNet-34 implementation using the single-precision 32-bit data format is 73.59%. Binarizing weights and activations for all layers except the first and the last gives a top-1 accuracy of 60.5%. When binarizing ResNet we did not re-order any layer (as is done in XNOR-NET), and we used the same hyper-parameters and learning rate schedule as the baseline network. As a reference, for ResNet-18 the gap between XNOR-NET (1-bit weights and activations) and the full-precision network is 18% [17]. It is also interesting to note that the top-1 accuracy of single-precision AlexNet (57.20%) is lower than that of binarized ResNet-34 (60.5%).
We experimented with doubling the number of filters in each layer while reducing the precision of activations and weights. Table 4 shows the results of our analysis. Doubling the number of filters with 4-bit precision for both weights and activations beats the baseline accuracy by 0.9%. 4-bit activations with 2-bit (ternary) weights yield top-1 accuracy on par with the baseline. Reducing precision to 2 bits for both weights and activations degrades accuracy by only 0.2% compared to the baseline.

2We will open-source our implementation of reduced-precision AlexNet, ResNet and batch-normalized Inception networks.
Table 4: ResNet-34 top-1 validation accuracy (%) and compute cost as precision of activations (A) and weights (W) varies.

Width     Precision      Top-1 Acc. %   Compute cost
1x wide   32b A, 32b W   73.59          1x
1x wide    1b A,  1b W   60.54          0.03x
2x wide    4b A,  8b W   74.48          0.74x
2x wide    4b A,  4b W   74.52          0.50x
2x wide    4b A,  2b W   73.58          0.39x
2x wide    2b A,  4b W   73.50          0.39x
2x wide    2b A,  2b W   73.32          0.27x
2x wide    1b A,  1b W   69.85          0.15x
3x wide    1b A,  1b W   72.38          0.30x

Binarizing the weights and activations with 2x-wide filters gives a top-1 accuracy of 69.85%. This is just 3.7% worse than the baseline full-precision network while being only 15% of the cost of the baseline network. Widening the filters by 3x and binarizing the weights and activations reduces this gap to 1.2%, while the 3x-wide network is 30% of the cost of the full-precision baseline.
Although 4-bit precision seems to be enough for wide networks, we advocate 4-bit activation precision with 2-bit weight precision. With ternary weights one can replace the multipliers with adders, and with this configuration there is no loss of accuracy. Further, if some accuracy degradation is tolerable, one can even move to binary circuits for efficient hardware implementation while saving 32x in bandwidth for each of weights and activations compared to full-precision networks. All these gains can be realized with a simpler hardware implementation and lower compute cost than the baseline networks.

To the best of our knowledge, our ResNet binary and ternary (with 2-bit or 4-bit activations) top-1 accuracies are state-of-the-art results in the literature, including unpublished technical reports (with similar data augmentation [14]).
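Because ternary weights take only the values {-1, 0, +1}, each multiply in a dot product collapses to an add, a subtract, or nothing, which is why the multiplier can be dropped. A minimal illustrative sketch (ours, not the paper's code; the per-layer scaling factor used by TWN-style schemes is omitted):

```python
def ternary_dot(activations, ternary_weights):
    """Dot product with weights in {-1, 0, +1}: adders/subtractors only."""
    acc = 0
    for a, w in zip(activations, ternary_weights):
        if w == 1:
            acc += a
        elif w == -1:
            acc -= a
        # w == 0 contributes nothing
    return acc

print(ternary_dot([3, 1, 4, 1], [1, 0, -1, 1]))  # 3 - 4 + 1 = 0
```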
# 4.2 Batch-normalized Inception

We applied the WRPN scheme to the batch-normalized Inception network [9]. This network includes batch normalization of all layers and is a variant of GoogLeNet [20] in which the 5x5 convolutional filters are replaced by two 3x3 convolutions with up to 128-wide filters. Table 5 shows the results of our analysis. Using 4-bit activations and 2-bit weights and doubling the number of filter banks in the network produces a model that is almost on par in accuracy with the baseline single-precision network (0.02% loss in accuracy). The wide network with binary weights and activations is within 6.6% of the full-precision baseline.

Table 5: Batch-normalized Inception top-1 validation accuracy (%) and compute cost as precision of activations (A) and weights (W) varies.
# 5 Hardware-friendly quantization scheme

We adopt the straight-through estimator (STE) approach in our work [3]. When quantizing a real number to k bits, the cardinality of the set of quantized values is 2^k. Mathematically, this small and finite set would have zero gradients with respect to its inputs. The STE method circumvents this problem by defining an operator with arbitrary forward and backward operations.

Prior works using the STE approach define operators that quantize the weights based on the expectation of the weight tensors. For instance, TWN [12] uses a threshold and a scaling factor for each layer to quantize weights to the ternary domain. In TTQ [27], the scaling factors are learned parameters. XNOR-NET binarizes the weight tensor by computing the sign of the tensor values and then scaling by the mean of the absolute values of each output channel of weights. DoReFa uses a single scaling factor across the entire layer. For quantizing weights to k bits, where k > 1, DoReFa uses:

w_k = 2 * quantize_k( tanh(w_i) / (2 * max(|tanh(w_i)|)) + 1/2 ) - 1    (1)
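Read this way, Eq. 1 squashes the weights into [0, 1] with tanh and a max-based rescale, quantizes there, and then maps back to [-1, 1]. A small sketch of that reading (our illustration, not DoReFa's released code; quantize_k maps a value in [0, 1] onto one of the 2^k evenly spaced levels in [0, 1]):

```python
import numpy as np

def quantize_k(x, k):
    """Snap x in [0, 1] onto one of the 2^k evenly spaced levels in [0, 1]."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def dorefa_quantize_weights(w, k):
    """DoReFa-style k-bit weight quantization as in Eq. 1 (k > 1)."""
    t = np.tanh(w)
    x = t / (2.0 * np.max(np.abs(t))) + 0.5   # squash into [0, 1]
    return 2.0 * quantize_k(x, k) - 1.0       # affine map back to [-1, 1]

w = np.array([0.5, -1.2, 0.05])
print(dorefa_quantize_weights(w, k=2))        # values snapped onto k-bit levels of [-1, 1]
```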
Here w_k is the k-bit quantized version of the input w_i, and quantize_k is a quantization function that maps a floating-point number in the range [0, 1] to a k-bit number in the same range. The transcendental tanh operation constrains the weight value to lie between -1 and +1, and the affine transformation after quantization brings the range to [-1, 1].

We build on these approaches and propose a much simpler scheme. For quantizing weight tensors we first hard-constrain the values to lie within the range [-1, 1] using a min-max operation (e.g., tf.clip_by_value when using TensorFlow [2]). For quantizing activation tensor values, we constrain the values to lie within the range [0, 1]. This step is followed by a quantization step where a real number is quantized into a k-bit number. For k > 1, this is given as:

w_k = (1 / (2^(k-1) - 1)) * round((2^(k-1) - 1) * w_i)   and   a_k = (1 / (2^k - 1)) * round((2^k - 1) * a_i)    (2)
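A minimal sketch of this clip-and-round scheme (our reading of Eq. 2, not the authors' released implementation), with the straight-through estimator realized by passing gradients through the rounding step unchanged; TensorFlow is used because the paper's evaluations use it, but the helper names are ours:

```python
import tensorflow as tf

def ste_round(x):
    """round() in the forward pass, identity gradient in the backward pass (STE)."""
    return x + tf.stop_gradient(tf.round(x) - x)

def wrpn_quantize_weights(w, k):
    """k-bit weights per Eq. 2: clip to [-1, 1], round onto 2^(k-1) - 1 steps."""
    w = tf.clip_by_value(w, -1.0, 1.0)
    n = 2.0 ** (k - 1) - 1.0
    return ste_round(w * n) / n

def wrpn_quantize_activations(a, k):
    """k-bit activations per Eq. 2: clip to [0, 1], round onto 2^k - 1 steps."""
    a = tf.clip_by_value(a, 0.0, 1.0)
    n = 2.0 ** k - 1.0
    return ste_round(a * n) / n

w = tf.constant([0.7, -0.2, 1.4])
print(wrpn_quantize_weights(w, k=2))   # -> [1., 0., 1.]: the ternary levels {-1, 0, +1}
```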
Here w_i and a_i are the input real-valued weight and activation tensors and w_k and a_k are their quantized versions. One bit is reserved as a sign bit for weight values, hence the use of 2^(k-1) for these quantized values. Thus, weights can be stored and interpreted using signed data types and activations using unsigned data types. With appropriate affine transformations, the convolution operations (the bulk of the compute in the network during the forward pass) can be done on quantized values (integer operations in hardware), followed by scaling with floating-point constants (this scaling can be done in parallel with the convolution operation in hardware). When k = 1, for binary weights we use the BWN approach [5], where the binarized weight value is computed from the sign of the input value followed by scaling with the mean of the absolute values. For binarized activations we use the formulation in Eq. 2. We do not quantize the gradients and maintain the weights in reduced-precision format.
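For the k = 1 weight case just described, a sign-and-scale sketch (ours; we assume a single per-tensor scaling factor here for simplicity, whereas schemes such as XNOR-NET scale per output channel):

```python
import numpy as np

def binarize_weights_bwn(w):
    """Binarize weights as sign(w) scaled by the mean absolute value."""
    alpha = np.mean(np.abs(w))   # scaling factor
    return alpha * np.sign(w)

w = np.array([0.3, -0.7, 0.1, -0.2])
print(binarize_weights_bwn(w))   # [ 0.325 -0.325  0.325 -0.325]
```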
For convolution operations using WRPN, the forward pass during training (and the inference step) involves matrix multiplication of k-bit signed and k-bit unsigned operands. Since gradient values are in 32-bit floating-point format, the backward pass involves a matrix multiplication using a 32-bit and a k-bit operand for the gradient and weight update. When k > 1, the hard clipping of tensors to a range maps efficiently to min-max comparator units in hardware, as opposed to transcendental operations, which have long latency. The TTQ and DoReFa schemes involve a division operation and computing a maximum value over the input tensor: floating-point division is expensive in hardware, and computing the maximum of a tensor is an O(n) operation. Additionally, our quantization parameters are static and do not require any learning or back-propagation, unlike the TTQ approach. We avoid each of these costly operations and propose a simpler quantization scheme (clipping followed by rounding).
# 5.1 Efficiency improvements of reduced-precision operations on GPU, FPGA and ASIC

In practice, the effective performance and energy efficiency achievable for a low-precision compute operation depends heavily on the hardware that runs it. We study the efficiency of low-precision operations on various hardware targets: GPU, FPGA, and ASIC. For GPU, we evaluate WRPN on an Nvidia Titan X Pascal, and for FPGA we use an Intel Arria-10. We collect performance numbers from both previously reported analyses [16] and our own experiments. For FPGA, we implement the DNN accelerator architecture shown in Figure 3(a). This is a prototypical accelerator design used in various works (e.g., on FPGA [16] and on ASICs such as the TPU [10]). The core of the accelerator consists of a systolic array of processing elements (PEs)
to perform matrix and vector operations, along with on-chip buffers and an off-chip memory management unit. The PEs can be configured to support different precisions: (FP32, FP32), (INT4, INT4), (INT4, TER2), and (BIN1, BIN1). The (INT4, TER2) PE operates on ternary (+1, 0, -1) values and is optimized to include only an adder, since no multiplier is needed in this case. The binary (BIN1, BIN1) PE is implemented using XNOR and bitcount. Our RTL design targets the Arria-10 1150 FPGA. For our ASIC study, we synthesize the PE design in Intel 14 nm process technology to obtain area and energy estimates.
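To make the (BIN1, BIN1) PE concrete, here is a software sketch (ours) of the XNOR-and-bitcount dot product it implements, with bit packing simplified and scaling factors omitted. Encoding +1 as bit 1 and -1 as bit 0, the dot product of n values is 2*popcount(XNOR(a, w)) - n:

```python
def binary_dot(a_bits, w_bits, n):
    """Dot product of n {-1, +1} values packed as bits (1 -> +1, 0 -> -1)."""
    mask = (1 << n) - 1
    agree = ~(a_bits ^ w_bits) & mask     # XNOR: 1 wherever the signs match
    matches = bin(agree).count("1")       # bitcount / popcount
    return 2 * matches - n                # convert match count to a signed sum

# a = [+1, +1, -1, +1] -> 0b1101, w = [+1, -1, +1, +1] -> 0b1011
print(binary_dot(0b1101, 0b1011, 4))      # (+1) + (-1) + (-1) + (+1) = 0
```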
# " ! / 45 45 / !"#$%&’( )*++,-. 6,7(89-: !"#$%&’()*+’,-’$ (.--,/.$-()&01)-($ "1(-,$+2$"&)*3&,)4 !5#$ 678$(.--,/.($ 9+’$:+3*.’-5&(&+2$ +.-’1)&+2( !,#$%76;$(.--,/.($ 9+’$:+3*.’-5&(&+2$ +.-’1)&+2( !1#$JAA$ 41’,31’-$ /2,-’$()/,D BF<F < & 4 , : ? 8 ’ # 3 2 1 0 * ( / * # , . - , + ’ * # ) ( ’ & > 2 # 0 # ) ( * " 0 . - , + ’ * # ) ( ’ & > 2 # 0 # ) ( * " 0 ’ 7 6 * # 2 5 < ; 4 & 4 , : 9 4 # 3 2 1 0 * ( / * # , 8 = =
Figure 3: Efficiency improvements from low-precision operations on GPU, FPGA and ASIC.

Figures 3(b)-(g) summarize our analysis. Figure 3(b) shows efficiency improvements using first-order estimates, where efficiency is computed from the number of bits used in the operation. By this estimate, we would expect (INT4, INT4) and (BIN1, BIN1) to be 8x and 32x more efficient, respectively, than (FP32, FP32). In practice, however, the efficiency gains from reducing precision depend on whether the underlying hardware can take advantage of such low precisions. Figure 3(c) shows the performance improvement on the Titan X GPU for various low-precision operations relative to FP32. In this case, the GPU achieves only up to 4x improvement over the FP32 baseline, because the GPU provides first-class support only for INT8 operations and cannot exploit the lower INT4, TER2, and BIN1 precisions. In contrast, the FPGA can take advantage of such low precisions, since they are amenable to implementation on its reconfigurable fabric.
Figure 3(d) shows that the performance improvements from (INT4, INT4), (INT4, TER2), and (BIN1, BIN1) on the FPGA track the first-order estimates from Figure 3(b) well; in fact, for (BIN1, BIN1) the FPGA improvements exceed the first-order estimate. Reducing precision simplifies the design of the compute units and lowers buffering requirements on the FPGA board. Precision reduction leads to significant throughput improvements thanks to smaller hardware designs (allowing more parallelism) and shorter circuit delays (allowing higher frequency). Figure 3(e) shows the performance and performance/Watt of the reduced-precision operations on GPU and FPGA. The FPGA performs quite well on very low-precision operations; in terms of performance/Watt, it does better than the GPU at (INT4, INT4) and lower precisions.
An ASIC allows a truly customized hardware implementation, and our ASIC study provides insight into the upper bound of the efficiency benefits possible from low-precision operations. Figures 3(f) and 3(g) show the improvement in performance and energy efficiency of the various low-precision ASIC PEs relative to a baseline FP32 PE. As the figures show, going to lower precision offers two to three orders of magnitude of efficiency improvement.

In summary, FPGA and ASIC are well suited to our WRPN approach. At 2x wide, WRPN requires 4x more total operations than the original network; however, at INT4 or lower precision, each operation is 6.5x or more efficient than FP32 on FPGA and ASIC. Hence, WRPN delivers an overall efficiency win.
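As a rough sanity check of that claim (our arithmetic, using only the figures quoted above): with 4x more operations and each operation at least 6.5x more efficient than FP32, the relative cost is at most 4 / 6.5 ≈ 0.62 of the baseline, so the widened reduced-precision network still comes out ahead.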
# 6 Related work

Reduced-precision DNNs are an active research area. Reducing the precision of weights for an efficient inference pipeline has been studied extensively. Works such as BinaryConnect (BC) [5], Ternary Weight Networks (TWN) [12], fine-grained ternary quantization [14] and INQ [25] reduce the precision of network weights while still using full-precision activations. Accuracy is almost always degraded when quantizing the weights; for AlexNet on ImageNet, TWN loses 5% top-1 accuracy. Schemes like INQ, [18] and [14] fine-tune while quantizing the network weights and do not sacrifice as much accuracy, but they are not applicable to training networks from scratch. INQ shows promising results with 5 bits of precision.
XNOR-NET [17], BNN [4], DoReFa [26] and TTQ [27] target training as well. While TTQ targets weight quantization only, most works that also quantize activations hurt accuracy: the XNOR-NET approach reduces top-1 accuracy by 12% and DoReFa by 8% when quantizing both weights and activations to 1 bit (for AlexNet on ImageNet). Further, XNOR-NET requires re-ordering of layers for its scheme to work. Recent work in [6] targets low-precision activations and reports accuracy within 1% of baseline with 5-bit precision and logarithmic (base √2) quantization. With fine-tuning this gap can be narrowed to within 0.6%, but not all layers are quantized.
Operand bit-widths that are not multiples of two introduce hardware inefficiency: memory accesses are no longer DRAM- or cache-boundary aligned, and the end-to-end run-time performance of complicated quantization schemes is unclear. We target end-to-end training and inference using a very simple quantization method, and aim to reduce precision without any loss in accuracy. To the best of our knowledge, our work is the first to study reduced-precision deep and wide networks and show accuracy on par with the baseline for precisions as low as 4-bit activations and 2-bit weights. We report state-of-the-art accuracy for wide binarized AlexNet and ResNet while still having lower compute cost.
# 7 Conclusions

We present the Wide Reduced-Precision Networks (WRPN) scheme for DNNs. In this scheme, the numeric precision of both weights and activations is significantly reduced without loss of network accuracy. This result is in contrast to many previous works that find reduced-precision activations to detrimentally impact accuracy; specifically, we find that 2-bit weights and 4-bit activations are sufficient to match baseline accuracy across many networks, including AlexNet, ResNet-34 and batch-normalized Inception. We achieve this result with a new quantization scheme and by increasing the number of filter maps in each reduced-precision layer to compensate for the loss of information capacity induced by reducing the precision. We motivate this work with our observation that full-precision activations contribute significantly more to the memory footprint than full-precision weight parameters when using mini-batch sizes common during training and cloud-based inference; furthermore, by reducing the precision of both activations and weights, the compute complexity is greatly reduced (40% of baseline for 2-bit weights and 4-bit activations).
The WRPN quantization scheme and computation on low-precision activations and weights are hardware friendly, making the scheme viable for deeply-embedded system deployments as well as for cloud-based training and inference servers with low-precision compute fabrics. We compare Titan X GPU, Arria-10 FPGA and ASIC implementations using WRPN and show that our scheme increases performance and energy efficiency at iso-accuracy on each. Overall, reducing the precision allows custom-designed compute units and lower buffering requirements to provide a significant improvement in throughput.
# References

[1] https://github.com/ppwwyyxx/tensorpack.

[2] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
1709.01134#44
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
45
[3] Y. Bengio, N. Léonard, and A. C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013.

[4] M. Courbariaux and Y. Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016.

[5] M. Courbariaux, Y. Bengio, and J. David. Binaryconnect: Training deep neural networks with binary weights during propagations. CoRR, abs/1511.00363, 2015.

[6] B. Graham. Low-precision batch-normalized activations. CoRR, abs/1702.08231, 2017.

[7] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015.

[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
1709.01134#45
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
47
[10] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. Vazir Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M.
1709.01134#47
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
48
M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon. In-Datacenter Performance Analysis of a Tensor Processing Unit. ArXiv e-prints, Apr. 2017.
1709.01134#48
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
49
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.

[12] F. Li and B. Liu. Ternary weight networks. CoRR, abs/1605.04711, 2016.

[13] Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. Neural networks with few multiplications. CoRR, abs/1510.03009, 2015.

[14] N. Mellempudi, A. Kundu, D. Mudigere, D. Das, B. Kaul, and P. Dubey. Ternary neural networks with fine-grained quantization. ArXiv e-prints, May 2017.

[15] D. Miyashita, E. H. Lee, and B. Murmann. Convolutional neural networks using logarithmic data representation. CoRR, abs/1603.01025, 2016.
1709.01134#49
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
50
[16] E. Nurvitadhi, G. Venkatesh, J. Sim, D. Marr, R. Huang, J. Ong Gee Hock, Y. T. Liew, K. Srivatsan, D. Moss, S. Subhaschandra, and G. Boudoukh. Can FPGAs beat GPUs in accelerating next-generation deep neural networks? In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '17, pages 5–14, New York, NY, USA, 2017. ACM.

[17] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016.

[18] W. Sung, S. Shin, and K. Hwang. Resiliency of deep neural networks under quantization. CoRR, abs/1511.06488, 2015.

[19] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016.
1709.01134#50
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
51
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.

[21] Y. Umuroglu, N. J. Fraser, G. Gambardella, M. Blott, P. H. W. Leong, M. Jahre, and K. A. Vissers. FINN: A framework for fast, scalable binarized neural network inference. CoRR, abs/1612.07119, 2016.

[22] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.

[23] G. Venkatesh, E. Nurvitadhi, and D. Marr. Accelerating deep convolutional networks using low-precision and sparsity. CoRR, abs/1610.00324, 2016.
1709.01134#51
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1709.01134
52
[24] S. Zagoruyko and N. Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016.

[25] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. CoRR, abs/1702.03044, 2017.

[26] S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016.

[27] C. Zhu, S. Han, H. Mao, and W. J. Dally. Trained ternary quantization. CoRR, abs/1612.01064, 2016.
1709.01134#52
WRPN: Wide Reduced-Precision Networks
For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
http://arxiv.org/pdf/1709.01134
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
cs.CV, cs.LG, cs.NE
null
null
cs.CV
20170904
20170904
[]
1708.07860
0
# Multi-task Self-Supervised Visual Learning

Carl Doersch†   Andrew Zisserman†,∗

†DeepMind   ∗VGG, Department of Engineering Science, University of Oxford

# Abstract

We investigate methods for combining multiple self-supervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a naïve multi-head architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
1708.07860#0
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
1
Han Xiao
Zalando Research
Mühlenstraße 25, 10243 Berlin
[email protected]

Kashif Rasul
Zalando Research
Mühlenstraße 25, 10243 Berlin
[email protected]

Roland Vollgraf
Zalando Research
Mühlenstraße 25, 10243 Berlin
[email protected]

# Abstract

We present Fashion-MNIST, a new dataset comprising of 28 × 28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist.

# 1 Introduction
1708.07747#1
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
1
# 1. Introduction

Vision is one of the most promising domains for unsupervised learning. Unlabeled images and video are available in practically unlimited quantities, and the most prominent present image models—neural networks—are data starved, easily memorizing even random labels for large image collections [45]. Yet unsupervised algorithms are still not very effective for training neural networks: they fail to adequately capture the visual semantics needed to solve real-world tasks like object detection or geometry estimation the way strongly-supervised methods do. For most vision problems, the current state-of-the-art approach begins by training a neural network on ImageNet [35] or a similarly large dataset which has been hand-annotated.

How might we better train neural networks without manual labeling? Neural networks are generally trained via backpropagation on some objective function. Without labels, however, what objective function can measure how good the network is? Self-supervised learning answers this
1708.07860#1
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
2
# 1 Introduction

The MNIST dataset, comprising 10-class handwritten digits, was first introduced by LeCun et al. [1998] in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can do so much, the simple MNIST dataset has become the most widely used testbed in deep learning, surpassing CIFAR-10 [Krizhevsky and Hinton, 2009] and ImageNet [Deng et al., 2009] in its popularity via Google trends1. Despite its simplicity, its usage does not seem to be decreasing despite calls for it in the deep learning community.

The reason MNIST is so popular has to do with its size, allowing deep learning researchers to quickly check and prototype their algorithms. This is also complemented by the fact that all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. Tensorflow, Pytorch) provide helper functions and convenient examples that use MNIST out of the box.
1708.07747#2
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
2
question by proposing various tasks for networks to solve, where performance is easy to measure, i.e., performance can be captured with an objective function like those seen in supervised learning. Ideally, these tasks will be difficult to solve without understanding some form of image semantics, yet any labels necessary to formulate the objective function can be obtained automatically. In the last few years, a considerable number of such tasks have been proposed [1, 2, 6, 7, 8, 17, 20, 21, 23, 25, 26, 27, 28, 29, 31, 39, 40, 42, 43, 46, 47], such as asking a neural network to colorize grayscale images, fill in image holes, solve jigsaw puzzles made from image patches, or predict movement in videos. Neural networks pre-trained with these tasks can be re-trained to perform well on standard vision tasks (e.g. image classification, object detection, geometry estimation) with less manually-labeled data than networks which are initialized randomly. However, they still perform worse in this setting than networks pre-trained on ImageNet.
1708.07860#2
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
3
Our aim with this work is to create a good benchmark dataset which has all the accessibility of MNIST, namely its small size, straightforward encoding and permissive license. We took the approach of sticking to the 10 classes and 70,000 grayscale images in the size of 28 × 28 as in the original MNIST. In fact, the only change one needs to use this dataset is to change the URL from where the MNIST dataset is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits data, whereas the latter has been trained to accuracies above 99.7% as reported in Wan et al. [2013], Ciregan et al. [2012].

We also looked at the EMNIST dataset provided by Cohen et al. [2017], an extended version of MNIST that extends the number of classes by introducing uppercase and lowercase characters. However, to be able to use it seamlessly one needs to not only extend the deep learning framework’s MNIST helpers, but also change the underlying deep neural network to classify these extra classes.

1 https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet

# 2 Fashion-MNIST Dataset
1708.07747#3
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
3
This paper advances self-supervision first by implementing four self-supervision tasks and comparing their performance using three evaluation measures. The self-supervised tasks are: relative position [7], colorization [46], the “exemplar” task [8], and motion segmentation [27] (described in section 2). The evaluation measures (section 5) assess a diverse set of applications that are standard for this area, including ImageNet image classification, object category detection on PASCAL VOC 2007, and depth prediction on NYU v2.

Second, we evaluate if performance can be boosted by combining these tasks to simultaneously train a single trunk network. Combining the tasks fairly in a multi-task learning objective is challenging since the tasks learn at different rates, and we discuss how we handle this problem in section 4. We find that multiple tasks work better than one, and explore which combinations give the largest boost.
1708.07860#3
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
4
# 2 Fashion-MNIST Dataset

Fashion-MNIST is based on the assortment on Zalando’s website2. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.

We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline, which is visualized in Figure 1.

1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.
1708.07747#4
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
4
Third, we identify two reasons why a naïve combination of self-supervision tasks might conflict, impeding performance: input channels can conflict, and learning tasks can conflict. The first sort of conflict might occur when jointly training colorization and exemplar learning: colorization receives grayscale images as input, while exemplar learning receives all color channels. This puts an unnecessary burden on low-level feature detectors that must operate across domains. The second sort of conflict might happen when one task learns semantic categorization (i.e. generalizing across instances of a class) and another learns instance matching (which should not generalize within a class). We resolve the first conflict via “input harmonization”, i.e. modifying network inputs so different tasks get more similar inputs. For the second conflict, we extend our multi-task learning architecture with a lasso-regularized combination of features from different layers, which encourages the network to separate features that are useful for different tasks. These architectures are described in section 3.
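As a loose sketch of what a lasso-regularized combination of per-layer features could look like (the module name, tensor shapes, and framework choice are our assumptions; the paper's actual architecture is the one described in its section 3):

```python
import torch
import torch.nn as nn

class LassoFeatureCombiner(nn.Module):
    """Each task head receives a weighted sum of features from several
    layers; an L1 (lasso) penalty on the mixing weights encourages each
    task to select only a few layers."""

    def __init__(self, num_layers, num_tasks):
        super().__init__()
        # alpha[t, l] = weight of layer l's features for task t
        self.alpha = nn.Parameter(torch.ones(num_tasks, num_layers) / num_layers)

    def forward(self, layer_features, task_idx):
        # layer_features: list of per-layer tensors, each of shape [B, C]
        stacked = torch.stack(layer_features, dim=0)      # [L, B, C]
        weights = self.alpha[task_idx].view(-1, 1, 1)     # [L, 1, 1]
        return (weights * stacked).sum(dim=0)             # [B, C]

    def lasso_penalty(self):
        # Added to the training loss to encourage sparse layer selection.
        return self.alpha.abs().sum()
```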
1708.07860#4
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
5
defined by the distance within 5% of the maximum possible intensity in RGB space.

3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and putting the image at the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
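As an illustration only (not the authors' actual conversion code), the seven steps above could be approximated with Pillow roughly as follows; the resampling, trimming, and sharpening parameters are simplifications on our part.

```python
from PIL import Image, ImageFilter, ImageOps

def to_fashion_mnist(path):
    """Rough sketch of the conversion pipeline described above."""
    img = Image.open(path).convert("RGB")            # 1. decode the input image
    bbox = ImageOps.invert(img).getbbox()            # 2. trim near-background edges (simplified)
    if bbox:
        img = img.crop(bbox)
    img.thumbnail((28, 28), Image.NEAREST)           # 3. resize longest edge to 28 by subsampling
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0))  # 4. sharpen
    canvas = Image.new("RGB", (28, 28), "white")     # 5. extend shortest edge, center on canvas
    canvas.paste(img, ((28 - img.width) // 2, (28 - img.height) // 2))
    img = ImageOps.invert(canvas)                    # 6. negate intensities
    return img.convert("L")                          # 7. 8-bit grayscale
```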
1708.07747#5
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
5
We use a common deep network across all experiments, a ResNet-101-v2, so that we can compare various diverse self-supervision tasks apples-to-apples. This comparison is the first of its kind. Previous work applied self-supervision tasks over a variety of CNN architectures (usually relatively shallow), and often evaluated the representations on different tasks; and even where the evaluation tasks are the same, there are often differences in the fine-tuning algorithms. Consequently, it has not been possible to compare the performance of different self-supervision tasks across papers.

Carrying out multiple fair comparisons, together with the implementation of the self-supervised tasks, joint training, evaluations, and optimization of a large network for several large datasets has been a significant engineering challenge. We describe how we carried out the large scale training efficiently in a distributed manner in section 4. This is another contribution of the paper. As shown in the experiments of section 6, by combining multiple self-supervision tasks we are able to close further the gap between self-supervised and fully supervised pre-training over all three evaluation measures.

# 1.1. Related Work
1708.07860#5
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
6
Figure 1: Diagram of the conversion process used to generate Fashion-MNIST dataset. Two examples from dress and sandals categories are depicted, respectively. Each column represents a step described in section 2: (1) PNG image, (2) Trimming, (3) Resizing, (4) Sharpening, (5) Extending, (6) Negating, (7) Grayscaling.

Table 1: Files contained in the Fashion-MNIST dataset.

| Name | Description | # Examples | Size |
| --- | --- | --- | --- |
| train-images-idx3-ubyte.gz | Training set images | 60,000 | 25 MBytes |
| train-labels-idx1-ubyte.gz | Training set labels | 60,000 | 140 Bytes |
| t10k-images-idx3-ubyte.gz | Test set images | 10,000 | 4.2 MBytes |
| t10k-labels-idx1-ubyte.gz | Test set labels | 10,000 | 92 Bytes |

For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product
1708.07747#6
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
6
# 1.1. Related Work

Self-supervision tasks for deep learning generally involve taking a complex signal, hiding part of it from the network, and then asking the network to fill in the missing information. The tasks can broadly be divided into those that use auxiliary information or those that only use raw pixels.

Tasks that use auxiliary information such as multi-modal information beyond pixels include: predicting sound given videos [26], predicting camera motion given two images of the same scene [1, 17, 44], or predicting what robotic motion caused a change in a scene [2, 29, 30, 31, 32]. However, non-visual information can be difficult to obtain: estimating motion requires IMU measurements, running robots is still expensive, and sound is complex and difficult to evaluate quantitatively.

Thus, many works use raw pixels. In videos, time can be a source of supervision. One can simply predict future [39, 40], although such predictions may be difficult to
1708.07860#6
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
7
2 Zalando is Europe’s largest online fashion platform. http://www.zalando.com

contains only one silhouette code. Table 2 gives a summary of all class labels in Fashion-MNIST with examples for each class.

Finally, the dataset is divided into a training and a test set. The training set receives a randomly-selected 6,000 examples from each class. Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices. The result files are listed in Table 1. We sort examples by their labels while storing, resulting in smaller label files after compression compared to MNIST. It is also easier to retrieve examples with a certain class label. The data shuffling job is therefore left to the algorithm developer.

Table 2: Class names and example images in Fashion-MNIST dataset (example images omitted).

| Label | Description |
| --- | --- |
| 0 | T-Shirt/Top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandals |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boots |
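Because the files listed in Table 1 use the same gzipped IDX layout as the original MNIST release (a 16-byte header for image files and an 8-byte header for label files), a minimal loader can be sketched as below; the file paths are the names from Table 1 and are assumed to be in the current directory.

```python
import gzip
import numpy as np

def load_images(path):
    """Read a gzipped IDX image file (MNIST layout: 16-byte header)."""
    with gzip.open(path, "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8, offset=16)
    return data.reshape(-1, 28, 28)

def load_labels(path):
    """Read a gzipped IDX label file (MNIST layout: 8-byte header)."""
    with gzip.open(path, "rb") as f:
        return np.frombuffer(f.read(), dtype=np.uint8, offset=8)

X_train = load_images("train-images-idx3-ubyte.gz")
y_train = load_labels("train-labels-idx1-ubyte.gz")
X_test = load_images("t10k-images-idx3-ubyte.gz")
y_test = load_labels("t10k-labels-idx1-ubyte.gz")
```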
1708.07747#7
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
7
evaluate. One way to simplify the problem is to ask a network to temporally order a set of frames sampled from a video [23]. Another is to note that objects generally appear across many frames: thus, we can train features to remain invariant as a video progresses [11, 24, 42, 43, 47]. Finally, motion cues can separate foreground objects from background. Neural networks can be asked to reproduce these motion-based boundaries without seeing motion [21, 27].

Self-supervised learning can also work with a single image. One can hide a part of the image and ask the network to make predictions about the hidden part. The network can be tasked with generating pixels, either by filling in holes [6, 28], or recovering color after images have been converted to grayscale [20, 46]. Again, evaluating the quality of generated pixels is difficult. To simplify the task, one can extract multiple patches at random from an image, and then ask the network to position the patches relative to each other [7, 25]. Finally, one can form a surrogate “class” by taking a single image and altering it many times via translations, rotations, and color shifts [8], to create a synthetic categorization problem.
1708.07860#7
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
8
# 3 Experiments

We provide some classification results in Table 3 to form a benchmark on this data set. All algorithms are repeated 5 times by shuffling the training data and the average accuracy on the test set is reported. The benchmark on the MNIST dataset is also included for a side-by-side comparison. A more comprehensive table with explanations on the algorithms can be found on https://github.com/zalandoresearch/fashion-mnist.

Table 3: Benchmark on Fashion-MNIST (Fashion) and MNIST, test accuracy.

| Classifier | Parameter | Fashion | MNIST |
| --- | --- | --- | --- |
| DecisionTreeClassifier | criterion=entropy max_depth=10 splitter=best | 0.798 | 0.873 |
| DecisionTreeClassifier | criterion=entropy max_depth=10 splitter=random | 0.792 | 0.861 |
| DecisionTreeClassifier | criterion=entropy max_depth=50 splitter=best | 0.789 | 0.886 |

Continued on next page
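The parameter strings in Table 3 map directly onto scikit-learn constructor arguments, so one row of the benchmark can be approximated as follows (a sketch only: the exact evaluation protocol, including the 5-run averaging, is described in the text above, and the data arrays are assumed to come from a loader such as the one sketched earlier).

```python
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# X_train, y_train, X_test, y_test: uint8 arrays, images reshaped to (n, 784)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=10, splitter="best")
clf.fit(X_train.reshape(len(X_train), -1), y_train)
pred = clf.predict(X_test.reshape(len(X_test), -1))
print("test accuracy:", accuracy_score(y_test, pred))
```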
1708.07747#8
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
8
Our work is also related to multi-task learning. Several recent works have trained deep visual representations using multiple tasks [9, 12, 22, 37], including one work [18] which combines no less than 7 tasks. Usually the goal is to create a single representation that works well for every task, and perhaps share knowledge between tasks. Surprisingly, however, previous work has shown little transfer between diverse tasks. Kokkinos [18], for example, found a slight dip in performance with 7 tasks versus 2. Note that our work is not primarily concerned with the performance on the self-supervised tasks we combine: we evaluate on a separate set of semantic “evaluation tasks.” Some previous self-supervised learning literature has suggested performance gains from combining self-supervised tasks [32, 44], although these works used relatively similar tasks within relatively restricted domains where extra information was provided besides pixels. In this work, we find that pre-training on multiple diverse self-supervised tasks using only pixels yields strong performance.

# 2. Self-Supervised Tasks
1708.07860#8
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
9
# 2. Self-Supervised Tasks

Too many self-supervised tasks have been proposed in recent years for us to evaluate every possible combination. Hence, we chose representative self-supervised tasks to reimplement and investigate in combination. We aimed for tasks that were conceptually simple, yet also as diverse as possible. Intuitively, a diverse set of tasks should lead to a diverse set of features, which will therefore be more likely to span the space of features needed for general semantic image understanding. In this section, we will briefly describe the four tasks we investigated. Where possible, we followed the procedures established in previous works, although in many cases modifications were necessary for our multi-task setup.
1708.07860#9
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
10
Table 3 – continued from previous page Test Accuracy Parameter Classifier criterion=entropy max_depth=100 splitter=best criterion=gini max_depth=10 splitter=best criterion=entropy max_depth=50 splitter=random criterion=entropy max_depth=100 splitter=random criterion=gini max_depth=100 splitter=best criterion=gini max_depth=50 splitter=best criterion=gini max_depth=10 splitter=random criterion=gini max_depth=50 splitter=random criterion=gini max_depth=100 splitter=random ExtraTreeClassifier criterion=gini max_depth=10 splitter=best criterion=entropy max_depth=100 splitter=best criterion=entropy max_depth=10 splitter=best criterion=entropy max_depth=50 splitter=best criterion=gini max_depth=100 splitter=best criterion=gini max_depth=50 splitter=best criterion=entropy max_depth=50 splitter=random criterion=entropy max_depth=100 splitter=random criterion=gini max_depth=50 splitter=random criterion=gini max_depth=100 splitter=random criterion=gini
1708.07747#10
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
10
Relative Position [7]: This task begins by sampling two patches at random from a single image and feeding them both to the network without context. The network’s goal is to predict where one patch was relative to the other in the original image. The trunk is used to produce a representation separately for both patches, which are then fed into a head which combines the representations and makes a prediction. The patch locations are sampled from a grid, and pairs are always taken from adjacent grid points (including diagonals). Thus, there are only eight possible relative positions for a pair, meaning the network output is a simple eight-way softmax classification. Importantly, networks can learn to detect chromatic aberration to solve the task, a low-level image property that isn’t relevant to semantic tasks. Hence, [7] employs “color dropping”, i.e., randomly dropping 2 of the 3 color channels and replacing them with noise. We reproduce color dropping, though our harmonization experiments explore other approaches to dealing with chromatic aberration that clash less with other tasks.
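The following is a minimal NumPy sketch of the pair-sampling and color-dropping procedure described above. The patch size, grid geometry, and noise statistics are illustrative assumptions made for the example, not values taken from the paper.

```python
import numpy as np

# Sketch of relative-position pair sampling with "color dropping".
PATCH = 96
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]  # the 8 relative positions

def color_drop(patch, rng):
    """Keep one random color channel, replace the other two with noise."""
    out = patch.copy()
    keep = rng.integers(3)
    for c in range(3):
        if c != keep:
            out[..., c] = rng.normal(loc=out[..., c].mean(),
                                     scale=out[..., c].std() + 1e-6,
                                     size=out[..., c].shape)
    return out

def sample_pair(image, rng):
    """Return (patch1, patch2, label) where label indexes OFFSETS."""
    h, w, _ = image.shape
    # Anchor grid point chosen so that both patches stay inside the image.
    gy = rng.integers(1, h // PATCH - 1)
    gx = rng.integers(1, w // PATCH - 1)
    label = rng.integers(len(OFFSETS))
    dy, dx = OFFSETS[label]
    p1 = image[gy * PATCH:(gy + 1) * PATCH, gx * PATCH:(gx + 1) * PATCH]
    p2 = image[(gy + dy) * PATCH:(gy + dy + 1) * PATCH,
               (gx + dx) * PATCH:(gx + dx + 1) * PATCH]
    return color_drop(p1, rng), color_drop(p2, rng), label

rng = np.random.default_rng(0)
img = rng.random((384, 384, 3)).astype(np.float32)
x1, x2, y = sample_pair(img, rng)
print(x1.shape, x2.shape, y)   # (96, 96, 3) (96, 96, 3) label in [0, 8)
```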
1708.07860#10
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
11
splitter=random criterion=gini max_depth=50 splitter=random criterion=gini max_depth=100 splitter=random criterion=gini max_depth=10 splitter=random criterion=entropy max_depth=10 splitter=random GaussianNB priors=[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] GradientBoostingClassifier n_estimators=100 loss=deviance max_depth=10 n_estimators=50 loss=deviance max_depth=10 n_estimators=100 loss=deviance max_depth=3 n_estimators=10 loss=deviance max_depth=10 n_estimators=50 loss=deviance max_depth=3 n_estimators=10 loss=deviance max_depth=50 n_estimators=10 loss=deviance max_depth=3 KNeighborsClassifier weights=distance n_neighbors=5 p=1 weights=distance n_neighbors=9 p=1 weights=uniform n_neighbors=9 p=1 weights=uniform n_neighbors=5 p=1 weights=distance
1708.07747#11
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
11
Colorization [46]: Given a grayscale image (the L channel of the Lab color space), the network must predict the color at every pixel (specifically, the ab components of Lab). The color is predicted at a lower resolution than the image (a stride of 8 in our case, whereas a stride of 4 was used in [46]), and furthermore, the colors are vector quantized into 313 different categories. Thus, there is a 313-way softmax classification for every 8-by-8 pixel region of the image. Our implementation closely follows [46].
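As a rough illustration of how quantized color targets can be built, here is a small NumPy sketch. It uses a simple regular grid over ab space; the actual 313 in-gamut bins and soft-encoding details of [46] are not reproduced, and the bin width is an assumption for the example.

```python
import numpy as np

# Sketch of colorization target construction: quantize the ab channels into
# discrete bins and build per-region class labels for a softmax.
BIN = 10      # bin width in ab units (illustrative)
GRID = 22     # ab values roughly span [-110, 110] -> 22 bins per axis
STRIDE = 8    # predictions are made once per 8x8 pixel region

def ab_to_class(ab):
    """Map an HxWx2 ab image to integer bin indices (one class id per pixel)."""
    idx = np.clip((ab + 110.0) // BIN, 0, GRID - 1).astype(np.int64)
    return idx[..., 0] * GRID + idx[..., 1]

def make_targets(ab):
    """Downsample by STRIDE and return per-region class labels."""
    labels = ab_to_class(ab)
    return labels[STRIDE // 2::STRIDE, STRIDE // 2::STRIDE]

ab = np.random.uniform(-110, 110, size=(256, 256, 2)).astype(np.float32)
targets = make_targets(ab)
print(targets.shape, targets.max())   # (32, 32), class id < GRID * GRID
```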
1708.07860#11
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
12
p=1 weights=uniform n_neighbors=9 p=1 weights=uniform n_neighbors=5 p=1 weights=distance n_neighbors=5 p=2 weights=distance n_neighbors=9 p=2 weights=uniform n_neighbors=5 p=2 weights=uniform n_neighbors=9 p=2 weights=distance n_neighbors=1 p=2 weights=uniform n_neighbors=1 p=2 weights=uniform n_neighbors=1 p=1 weights=distance n_neighbors=1 p=1 LinearSVC loss=hinge C=1 multi_class=ovr penalty=l2 loss=hinge C=1 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=1 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=1 multi_class=crammer_singer penalty=l1
1708.07747#12
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
12
Exemplar [8]: The original implementation of this task created pseudo-classes, where each class was generated by taking a patch from a single image and augmenting it via translation, rotation, scaling, and color shifts [8]. The network was trained to discriminate between pseudo-classes. Unfortunately, this approach isn’t scalable to large datasets, since the number of categories (and therefore, the number of parameters in the final fully-connected layer) scales linearly in the number of images. However, the approach can be extended to allow an infinite number of classes by using a triplet loss, similar to [42], instead of a classification loss per class. Specifically, we randomly sample two patches x1 and x2 from the same pseudo-class, and a third patch x3 from a different pseudo-class (i.e. from a different image). The network is trained with a loss of the form max(D(f(x1), f(x2)) − D(f(x1), f(x3)) + M, 0), where D is the cosine distance, f(x) is the network features (including a small head) for patch x, and M is a margin which we set to 0.5.
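A minimal PyTorch sketch of this triplet loss, assuming the cosine distance is 1 minus cosine similarity and using random tensors in place of the trunk-plus-head features:

```python
import torch
import torch.nn.functional as F

# Margin-based triplet loss: max(D(f(x1), f(x2)) - D(f(x1), f(x3)) + M, 0).
M = 0.5

def cosine_distance(a, b):
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def exemplar_triplet_loss(f1, f2, f3):
    """f1/f2 come from the same pseudo-class, f3 from a different image."""
    pos = cosine_distance(f1, f2)
    neg = cosine_distance(f1, f3)
    return torch.clamp(pos - neg + M, min=0.0).mean()

# Toy usage with random features standing in for network outputs.
f1, f2, f3 = (torch.randn(8, 512) for _ in range(3))
print(exemplar_triplet_loss(f1, f2, f3))
```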
1708.07860#12
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
13
Motion Segmentation [27]: Given a single frame of video, this task asks the network to classify which pixels will move in subsequent frames. The “ground truth” mask of moving pixels is extracted using standard dense tracking algorithms. We follow Pathak et al. [27], except that we replace their tracking algorithm with Improved Dense Trajectories [41]. Keypoints are tracked over 10 frames, and any pixel not labeled as camera motion by that algorithm is treated as foreground. The label image is downsampled by a factor of 8. The resulting segmentations look qualitatively similar to those given in Pathak et al. [27]. The network is trained via a per-pixel cross-entropy with the label image. Datasets: The three image-based tasks are all trained on ImageNet, as is common in prior work. The motion segmentation task uses the SoundNet dataset [3]. It is an open problem whether performance can be improved by different choices of dataset, or indeed by training on much larger datasets. # 3. Architectures
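Since the head emits one logit per pixel of the downsampled label image, the training loss reduces to a per-pixel binary cross-entropy. A small PyTorch sketch with illustrative shapes (240×320 inputs, labels downsampled by 8):

```python
import torch
import torch.nn.functional as F

# Per-pixel loss for motion segmentation: one logit per downsampled pixel,
# trained against the binary "will this pixel move" mask.
logits = torch.randn(4, 1, 30, 40)                  # B x 1 x H/8 x W/8 head output
mask = torch.randint(0, 2, (4, 1, 30, 40)).float()  # 1 = moving pixel, 0 = static

loss = F.binary_cross_entropy_with_logits(logits, mask)
print(loss)
```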
1708.07860#13
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
14
0.789 0.788 0.787 0.787 0.785 0.783 0.783 0.779 0.777 0.775 0.775 0.772 0.772 0.769 0.768 0.752 0.752 0.748 0.745 0.739 0.737 0.511 0.880 0.872 0.862 0.849 0.840 0.795 0.782 0.854 0.854 0.853 0.852 0.852 0.849 0.849 0.847 0.839 0.839 0.838 0.838 0.836 0.835 0.834 0.833 0.833 0.820 0.779 0.776 0.764 0.758 # loss=squared_hinge C=1 multi_class=ovr penalty=l2 # loss=squared_hinge C=10 multi_class=ovr penalty=l2 # loss=squared_hinge C=100 multi_class=ovr penalty=l2 # loss=hinge C=10 multi_class=ovr penalty=l2 # loss=hinge C=100 multi_class=ovr penalty=l2
1708.07747#14
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
14
# 3. Architectures In this section we describe three architectures: first, the (naïve) multi-task network that has a common trunk and a head for each task (figure 1a); second, the lasso extension of this architecture (figure 1b) that enables the training to determine the combination of layers to use for each self-supervised task; and third, a method for harmonizing input channels across self-supervision tasks. # 3.1. Common Trunk Our architecture begins with Resnet-101 v2 [15], as implemented in TensorFlow-Slim [13]. We keep the entire architecture up to the end of block 3, and use the same block 3 representation to solve all tasks and evaluations (see figure 1a). Thus, our “trunk” has an output with 1024 channels, and consists of 88 convolution layers with roughly 30 million parameters. Block 4 contains an additional 13 conv layers and 20 million parameters, but we don’t use it to save computation.
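A minimal PyTorch sketch of truncating a ResNet-101 at the end of block 3 to form a shared trunk. Note that torchvision ships the v1 (post-activation) ResNet rather than the v2 variant used here, so this only illustrates the truncation idea; `layer3` is the block with 23 residual units and 1024 output channels.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class Trunk(nn.Module):
    def __init__(self):
        super().__init__()
        # weights=None requires torchvision >= 0.13; older versions use pretrained=False.
        r = resnet101(weights=None)
        # Keep everything up to and including layer3; drop layer4 and the fc layer.
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.blocks = nn.Sequential(r.layer1, r.layer2, r.layer3)

    def forward(self, x):
        return self.blocks(self.stem(x))

trunk = Trunk()
feats = trunk(torch.randn(1, 3, 224, 224))
print(feats.shape)   # torch.Size([1, 1024, 14, 14])
```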
1708.07860#14
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
15
Each task has a separate loss, and has extra layers in a “head,” which may have a complicated structure. For instance, the relative position and exemplar tasks have a siamese architecture. We implement this by passing all patches through the trunk as a single batch, and then rearranging the elements in the batch to make pairs (or triplets) of representations to be processed by the head. At each training iteration, only one of the heads is active. However, gradients are averaged across many iterations where different heads are active, meaning that the overall loss is a sum of the losses of different tasks. Figure 1. The structure of our multi-task network. It is based on ResNet-101, with block 3 having 23 residual units. a) Naive shared-trunk approach, where each “head” is attached to the output of block 3. b) the lasso architecture, where each “head” receives a linear combination of unit outputs within block 3, weighted by the matrix α, which is trained to be sparse. # 3.2. Separating features via Lasso
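The control flow of the naive multi-head setup can be sketched as below: a shared trunk, a dictionary of task heads, and one active head per training step. The tiny trunk and heads here are placeholders for illustration only; the per-task gradient accumulation and optimizer details discussed later in the paper are omitted.

```python
import torch
import torch.nn as nn

# Naive shared-trunk multi-task setup: one head is active per iteration.
trunk = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
heads = nn.ModuleDict({
    "relative_position": nn.Linear(16, 8),    # 8-way softmax
    "colorization":      nn.Linear(16, 313),  # 313 ab bins
    "exemplar":          nn.Linear(16, 512),  # embedding for the triplet loss
})
params = list(trunk.parameters()) + list(heads.parameters())
opt = torch.optim.RMSprop(params, lr=1e-3)

def train_step(task, batch, target, loss_fn):
    opt.zero_grad()
    out = heads[task](trunk(batch))
    loss = loss_fn(out, target)
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(4, 3, 64, 64)
y = torch.randint(0, 8, (4,))
print(train_step("relative_position", x, y, nn.CrossEntropyLoss()))
```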
1708.07860#15
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
16
# 3.2. Separating features via Lasso Different tasks require different features; this applies for both the self-supervised training tasks and the evaluation tasks. For example, information about fine-grained breeds of dogs is useful for, e.g., ImageNet classification, and also colorization. However, fine-grained information is less useful for tasks like PASCAL object detection, or for relative positioning of patches. Furthermore, some tasks require only image patches (such as relative positioning) whilst others can make use of entire images (such as colorization), and consequently features may be learnt at different scales. This suggests that, while training on self-supervised tasks, it might be advantageous to separate out groups of features that are useful for some tasks but not others. This would help us with evaluation tasks: we expect that any given evaluation task will be more similar to some self-supervised tasks than to others. Thus, if the features are factorized into different tasks, then the network can select from the discovered feature groups while training on the evaluation tasks.
1708.07860#16
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
17
Test Accuracy Parameter Classifier loss=hinge C=10 multi_class=crammer_singer penalty=l1 loss=hinge C=10 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=10 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=10 multi_class=crammer_singer penalty=l1 loss=hinge C=100 multi_class=crammer_singer penalty=l1 loss=hinge C=100 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=100 multi_class=crammer_singer penalty=l1 loss=squared_hinge C=100 multi_class=crammer_singer penalty=l2 LogisticRegression C=1 multi_class=ovr penalty=l1 C=1 multi_class=ovr penalty=l2 C=10 multi_class=ovr penalty=l2 C=10 multi_class=ovr penalty=l1 C=100 multi_class=ovr penalty=l2 MLPClassifier activation=relu hidden_layer_sizes=[100] activation=relu
1708.07747#17
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
17
Inspired by recent works that extract information across network layers for the sake of transfer learning [14, 22, 36], we propose a mechanism which allows a network to choose which layers are fed into each task. The simplest approach might be to use a task-specific skip layer which selects a single layer in ResNet-101 (out of a set of equal-sized candidate layers) and feeds it directly into the task’s head. However, a hard selection operation isn’t differentiable, meaning that the network couldn’t learn which layer to feed into a task. Furthermore, some tasks might need information from multiple layers. Hence, we relax the hard selection process, and instead pass a linear combination of skip layers to each head. Concretely, each task has a set of coefficients, one for each of the 23 candidate layers in block 3. The representation that’s fed into each task head is a sum of the layer activations weighted by these task-specific coefficients. We impose a lasso (L1) penalty to encourage the combination to be sparse, which therefore encourages the network to concentrate all of the information
1708.07860#17
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
18
C=100 multi_class=ovr penalty=l2 MLPClassifier activation=relu hidden_layer_sizes=[100] activation=relu hidden_layer_sizes=[100, 10] activation=tanh hidden_layer_sizes=[100] activation=tanh hidden_layer_sizes=[100, 10] activation=relu hidden_layer_sizes=[10, 10] activation=relu hidden_layer_sizes=[10] activation=tanh hidden_layer_sizes=[10, 10] activation=tanh hidden_layer_sizes=[10] PassiveAggressiveClassifier C=1 C=100 C=10 Perceptron penalty=l1 penalty=l2 penalty=elasticnet RandomForestClassifier n_estimators=100 criterion=entropy max_depth=100 n_estimators=100 criterion=gini max_depth=100 n_estimators=50 criterion=entropy max_depth=100 n_estimators=100 criterion=entropy max_depth=50 n_estimators=50 criterion=entropy max_depth=50 n_estimators=100 criterion=gini max_depth=50 n_estimators=50 criterion=gini max_depth=50
1708.07747#18
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07747
19
max_depth=50 n_estimators=100 criterion=gini max_depth=50 n_estimators=50 criterion=gini max_depth=50 n_estimators=50 criterion=gini max_depth=100 n_estimators=10 criterion=entropy max_depth=50 n_estimators=10 criterion=entropy max_depth=100 n_estimators=10 criterion=gini max_depth=50 n_estimators=10 criterion=gini max_depth=100 n_estimators=50 criterion=entropy max_depth=10 n_estimators=100 criterion=entropy max_depth=10 n_estimators=100 criterion=gini max_depth=10 n_estimators=50 criterion=gini max_depth=10 n_estimators=10 criterion=entropy max_depth=10
1708.07747#19
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
19
Mathematically, we create a matrix α with N rows and M columns, where N is the number of self-supervised tasks, and M is the number of residual units in block 3. The representation passed to the head for task n is then: Σ_{m=1}^{M} α_{n,m} · Unit_m, where Unit_m is the output of residual unit m. We enforce that Σ_m α_{n,m}² = 1 for all tasks n, to control the output variance (note that the entries in α can be negative, so a simple sum is insufficient). To ensure sparsity, we add an L1 penalty on the entries of α to the objective function. We create a similar α matrix for the set of evaluation tasks. # 3.3. Harmonizing network inputs Each self-supervised task pre-processes its data differently, so the low-level image statistics are often very different across tasks. This puts a heavy burden on the trunk network, since its features must generalize across these statistical differences, which may impede learning. Furthermore, it gives the network an opportunity to cheat: the network might recognize which task it must solve, and only represent information which is relevant to that task, instead of truly multi-task features. This problem is especially bad for relative position, which pre-processes its input data by
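A minimal PyTorch sketch of this weighted skip-combination. Projecting each row of α to unit L2 norm inside the forward pass is one simple way to respect the Σ α² = 1 constraint; the tensor sizes and penalty weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Lasso skip-combination: each task owns a row of coefficients over the M
# candidate unit outputs; the head input is the weighted sum, rows are
# normalized to unit L2 norm, and an L1 penalty encourages sparsity.
N_TASKS, M_UNITS, C = 4, 23, 1024

class LassoCombine(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.randn(N_TASKS, M_UNITS) * 0.1)

    def forward(self, unit_outputs, task):
        # unit_outputs: list of M tensors, each B x C x H x W
        a = self.alpha[task]
        a = a / a.pow(2).sum().sqrt()               # enforce sum_m alpha^2 = 1
        stacked = torch.stack(unit_outputs, dim=0)  # M x B x C x H x W
        return torch.einsum("m,mbchw->bchw", a, stacked)

    def l1_penalty(self):
        return self.alpha.abs().sum()

combine = LassoCombine()
units = [torch.randn(2, C, 14, 14) for _ in range(M_UNITS)]
rep = combine(units, task=1)
loss = rep.mean() + 0.01 * combine.l1_penalty()     # toy loss + sparsity term
print(rep.shape, loss.item())
```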
1708.07860#19
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
20
Figure 2. Distributed training setup. Several GPU machines are allocated for each task, and gradients from each task are synchronized and aggregated with separate RMSProp optimizers. discarding 2 of the 3 color channels, selected at random, and replacing them with noise. Chromatic aberration is also hard to detect in grayscale images. Hence, to “harmonize,” we replace relative position’s preprocessing with the same preprocessing used for colorization: images are converted to Lab, and the a and b channels are discarded (we replicate the L channel 3 times so that the network can be evaluated on color images). # 3.4. Self-supervised network architecture implementation details This section provides more details on the “heads” used in our self-supervised tasks. The bulk of the changes relative to the original methods (that used shallower networks) involve replacing simple convolutions with residual units. Vanishing gradients can be a problem with networks as deep as ours, and residual networks can help alleviate this problem. We did relatively little experimentation with architectures for the heads, due to the high computational cost of restarting training from scratch.
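A small sketch of the harmonized preprocessing (RGB to Lab, keep only L, replicate to three channels), using scikit-image for the color conversion; the image size is illustrative.

```python
import numpy as np
from skimage import color

def harmonize(rgb):
    """rgb: HxWx3 float array in [0, 1]. Returns a 3-channel lightness-only image."""
    lab = color.rgb2lab(rgb)
    l_channel = lab[..., 0:1]                # keep only the L (lightness) channel
    return np.repeat(l_channel, 3, axis=-1)  # replicate so the input shape is unchanged

img = np.random.rand(96, 96, 3)
print(harmonize(img).shape)   # (96, 96, 3)
```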
1708.07860#20
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
21
Relative Position: Given a batch of patches, we begin by running ResNet-v2-101 at a stride of 8. Most block 3 convolutions produce outputs at stride 16, so running the network at stride 8 requires using convolutions that are dilated, or “atrous”, such that each neuron receives input from other neurons that are stride 16 apart in the previous layer. For further details, see the public implementation of ResNet-v2-101 striding in TF-Slim. Our patches are 96-by-96, meaning that we get a trunk feature map which is 12 × 12 × 1024 per patch. For the head, we apply two more residual units. The first has an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an output
1708.07860#21
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
22
size of 512 channels, bottleneck with 128 channels, and stride 2. This gives us a representation of 3×3×512 for each patch. We flatten this representation for each patch, and concatenate the representations for patches that are paired. We then have 3 “fully-connected” residual units (equivalent to a convolutional residual unit where the spatial shape of the input and output is 1 × 1). These are all identical, with input dimensionality and output dimensionality of 3*3*512=4608 and a bottleneck dimensionality of 512. The final fully connected layer has dimensionality 8, producing softmax outputs.
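The pairing step can be sketched as follows; a plain two-layer MLP stands in for the fully-connected residual units, so the 4608-dimensional width matches the description above but the residual structure itself is omitted.

```python
import torch
import torch.nn as nn

# Pairing per-patch features and predicting one of 8 relative positions.
feat_dim = 3 * 3 * 512          # flattened per-patch representation
head = nn.Sequential(
    nn.Linear(2 * feat_dim, 4608), nn.ReLU(),
    nn.Linear(4608, 8),          # softmax over the 8 relative positions
)

p1 = torch.randn(16, feat_dim)   # features for the first patch in each pair
p2 = torch.randn(16, feat_dim)   # features for the second patch
logits = head(torch.cat([p1, p2], dim=-1))
print(logits.shape)              # torch.Size([16, 8])
```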
1708.07860#22
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
23
Colorization: As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. Our input images are 256 × 256, meaning that we have a 32 × 32 × 1024 feature map. Obtaining good performance when colorization is combined with other tasks seems to require a large number of parameters in the head. Hence, we use two standard convolution layers with a ReLU nonlinearity: the first has a 2×2 kernel and 4096 output channels, and the second has a 1×1 kernel with 4096 channels. Both have stride 1. The final output logits are produced by a 1×1 convolution with stride 1 and 313 output channels. The head has a total of roughly 35M parameters. Preliminary experiments with a smaller number of parameters showed that adding colorization degraded performance. We hypothesize that this is because the network’s knowledge of color was pushed down into block 3 when the head was small, and thus the representations at the end of block 3 contained too much information about color.
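A PyTorch sketch of such a head; padding is omitted here, so the output spatial size is 31×31 rather than 32×32, but the parameter count comes out to roughly 35M as stated above.

```python
import torch
import torch.nn as nn

# Wide colorization head: two large convolutions, then a 1x1 projection to 313 bins.
head = nn.Sequential(
    nn.Conv2d(1024, 4096, kernel_size=2), nn.ReLU(),
    nn.Conv2d(4096, 4096, kernel_size=1), nn.ReLU(),
    nn.Conv2d(4096, 313, kernel_size=1),
)

trunk_features = torch.randn(1, 1024, 32, 32)   # stride-8 trunk output for a 256x256 image
logits = head(trunk_features)
print(logits.shape)   # torch.Size([1, 313, 31, 31])
print(sum(p.numel() for p in head.parameters()))  # ~35M parameters
```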
1708.07860#23
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
24
Test Accuracy Parameter Classifier Fashion MNIST 0.914 0.911 0.910 0.913 0.912 0.912 0.914 0.913 0.911 0.973 0.976 0.978 0.972 0.966 0.957 0.929 0.927 0.926 0.898 0.873 0.868 0.815 0.815 0.815 0.814 0.814 0.814 0.813 0.813 0.813 0.897 0.891 0.890 0.890 0.879 0.873 0.839 0.829 0.827 0.678 0.671 0.664 loss=squared_hinge penalty=elasticnet loss=hinge penalty=l1 loss=log penalty=l1 loss=perceptron penalty=l2 loss=perceptron penalty=elasticnet loss=squared_hinge penalty=l2 loss=modified_huber penalty=elasticnet loss=log penalty=l2 loss=squared_hinge penalty=l1 SVC C=10 kernel=rbf C=10 kernel=poly C=100 kernel=poly C=100 kernel=rbf C=1
1708.07747#24
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
24
Exemplar: As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. We resize our images to 256×256 and sample patches that are 96×96. Thus we have a feature map which is 12 × 12 × 1024. As with relative position, we apply two residual units, the first with an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an output size of 512 channels, bottleneck with 128 channels, and stride 2. Thus, we have a 3 × 3 × 512-dimensional feature, which is used directly to compute the distances needed for our loss. Motion Segmentation: We reshape all images to 240 × 320, to better approximate the aspect ratios that are common in our dataset. As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. We expected that, like colorization, motion segmentation could benefit from a large head. Thus, we have two 1 × 1 conv layers each with dimension 4096, followed by another 1×1 conv layer which produces a single value, which is treated as a logit and used as a per-pixel classification. Preliminary experiments with smaller heads have shown that such a large head is not necessarily important. # 4. Training the Network
1708.07860#24
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
25
# 4. Training the Network Training a network with nearly 100 hidden layers requires considerable compute power, so we distribute it across several machines. As shown in figure 2, each machine trains the network on a single task. Parameters for the ResNet-101 trunk are shared across all replicas. There are also several task-specific layers, or heads, which are shared only between machines that are working on the same task. Each worker repeatedly computes losses which are then backpropagated to produce gradients.
1708.07860#25
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
26
# 4 Conclusions This paper introduced Fashion-MNIST, a fashion product image dataset intended to be a drop-in replacement for MNIST while providing a more challenging alternative for benchmarking machine learning algorithms. The images in Fashion-MNIST are converted to a format that matches that of the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST dataset. # References D. Ciregan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3642–3649. IEEE, 2012. G. Cohen, S. Afshar, J. Tapson, and A. van Schaik. EMNIST: an extension of MNIST to handwritten letters. arXiv preprint arXiv:1702.05373, 2017. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009. A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
1708.07747#26
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
26
Given many workers operating independently, gradients are usually aggregated in one of two ways. The first option is asynchronous training, where a centralized parameter server receives gradients from workers, applies the updates immediately, and sends back the up-to-date parameters [5, 33]. We found this approach to be unstable, since gradients may be stale if some machines run slowly. The other approach is synchronous training, where the parameter server accumulates gradients from all workers, applies the accumulated update while all workers wait, and then sends back identical parameters to all workers [4], preventing stale gradients. “Backup workers” help prevent slow workers from slowing down training. However, in a multitask setup, some tasks are faster than others. Thus, slow tasks will not only slow down the computation, but their gradients are more likely to be thrown out. Hence, we used a hybrid approach: we accumulate gradients from all workers that are working on a single task, and then have the parameter servers apply the aggregated gradients from a single task when ready, without synchronizing with other tasks. Our experiments found that this approach resulted in faster learning than either purely synchronous or purely asynchronous training, and in particular, was more stable than asynchronous training.
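A toy Python sketch of this hybrid scheme, with scalar "gradients" standing in for real parameter updates: each task's gradients are buffered until all of that task's workers have reported, and the aggregated update is then applied without waiting on other tasks. Worker counts and the learning rate are illustrative.

```python
from collections import defaultdict

WORKERS_PER_TASK = {"relative_position": 3, "colorization": 2, "exemplar": 2}

class ParameterServer:
    def __init__(self):
        self.param = 0.0                      # stand-in for the shared trunk parameters
        self.pending = defaultdict(list)      # task -> gradients received so far

    def report(self, task, grad):
        self.pending[task].append(grad)
        if len(self.pending[task]) == WORKERS_PER_TASK[task]:
            update = sum(self.pending[task]) / len(self.pending[task])
            self.param -= 0.1 * update        # apply this task's update only
            self.pending[task].clear()

ps = ParameterServer()
for grad in [0.5, 0.2, -0.1]:                 # three relative-position workers report
    ps.report("relative_position", grad)
ps.report("colorization", 0.3)                # colorization still waiting on a worker
print(ps.param)
```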
1708.07860#26
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07747
27
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th international conference on machine learning (ICML-13), pages 1058–1066, 2013.
1708.07747#27
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
http://arxiv.org/pdf/1708.07747
Han Xiao, Kashif Rasul, Roland Vollgraf
cs.LG, cs.CV, stat.ML
Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
null
cs.LG
20170825
20170915
[ { "id": "1708.07747" }, { "id": "1702.05373" } ]
1708.07860
27
We also used the RMSProp optimizer, which has been shown to improve convergence in many vision tasks versus stochastic gradient descent. RMSProp re-scales the gradients for each parameter such that multiplying the loss by a constant factor does not change how quickly the network learns. This is a useful property in multi-task learning, since different loss functions may be scaled differently. Hence, we used a separate RMSProp optimizer for each task. That is, for each task, we keep separate moving averages of the squared gradients, which are used to scale the task’s accumulated updates before applying them to the parameters. For all experiments, we train on 64 GPUs in parallel, and save checkpoints every roughly 2.4K GPU (NVIDIA K40) hours. These checkpoints are then used as initialization for our evaluation tasks. # 5. Evaluation Here we describe the three evaluation tasks that we transfer our representation to: image classification, object category detection, and pixel-wise depth prediction.
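A NumPy sketch of keeping one RMSProp accumulator per task over shared parameters; the hyperparameter values and parameter shape are illustrative.

```python
import numpy as np

# Per-task RMSProp: each task keeps its own moving average of squared
# gradients, so differently-scaled task losses are normalized independently.
LR, DECAY, EPS = 1e-3, 0.9, 1e-8

class PerTaskRMSProp:
    def __init__(self, tasks, shape):
        self.ms = {t: np.zeros(shape) for t in tasks}   # one accumulator per task

    def step(self, params, task, grad):
        ms = self.ms[task]
        ms[:] = DECAY * ms + (1 - DECAY) * grad ** 2
        return params - LR * grad / (np.sqrt(ms) + EPS)

opt = PerTaskRMSProp(["relative_position", "colorization"], shape=(4,))
w = np.zeros(4)
w = opt.step(w, "colorization", np.array([10.0, -5.0, 0.1, 0.0]))
w = opt.step(w, "relative_position", np.array([0.01, 0.02, 0.0, -0.01]))
print(w)
```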
1708.07860#27
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
28
ImageNet with Frozen Weights: We add a single linear classification layer (a softmax) to the network at the end of block 3, and train on the full ImageNet training set. We keep all pre-trained weights frozen during training, so we can evaluate raw features. We evaluate on the ImageNet validation set. The training set is augmented in translation and color, following [38], but during evaluation, we don’t use multi-crop or mirroring augmentation. This evaluation is similar to evaluations used elsewhere (particularly Zhang et al. [46]). Performing well requires good representation of fine-grained object attributes (to distinguish, for example, breeds of dogs). We report top-5 recall in all charts (except Table 1, which reports top-1 to be consistent with previous works). For most experiments we use only the output of the final “unit” of block 3, and use max pooling to obtain a 3 × 3 × 1024 feature vector, which is flattened and used as the input to the one-layer classifier. For the lasso experiments, however, we use a weighted combination of the (frozen) features from all block 3 layers, and we learn the weight for each layer, following the structure described in section 3.2.
1708.07860#28
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
29
PASCAL VOC 2007 Detection: We use Faster-RCNN [34], which trains a single network base with multiple heads for object proposals, box classification, and box localization. Performing well requires the network to accurately represent object categories and locations, with penalties for missing parts which might be hard to recognize (e.g., a cat’s body is harder to recognize than its head). We fine-tune all network weights. For our ImageNet pre-trained ResNet-101 model, we transfer all layers up through block 3 from the pre-trained model into the trunk, and transfer block 4 into the proposal categorization head, as is standard. We do the same with our self-supervised network, except that we initialize the proposal categorization head randomly. Following Doersch et al. [7], we use multi-scale data augmentation for all methods, including baselines. All other settings were left at their defaults. We train on the VOC 2007 trainval set, and evaluate Mean Average Precision on the VOC 2007 test set. For the lasso experiments, we feed our lasso combination of block 3 layers into the heads, rather than the final output of block 3.
1708.07860#29
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
30
NYU V2 Depth Prediction: Depth prediction measures how well a network represents geometry, and how well that information can be localized to pixel accuracy. We use a modified version of the architecture proposed in Laina et al. [19]. We use the “up projection” operator defined in that work, as well as the reverse Huber loss. We replaced the ResNet-50 architecture with our ResNet-101 architecture, and feed the block 3 outputs directly into the up-projection layers (block 4 was not used in our setup). This means we need only 3 levels of up projection, rather than 4. Our up projection filter sizes were 512, 256, and 128. As with our PASCAL experiments, we initialize all layers up to block 3 using the weights from our self-supervised pre-training, and fine-tune all weights. We selected one measure—percent of pixels where relative error is below 1.25—as a representative measure (others available in appendix A). Relative error is defined as max(dgt/dp, dp/dgt), where dgt is ground-truth depth and dp is predicted depth. For the lasso experiments, we feed our lasso combination of block 3 layers into the up projection layers, rather than the final output of block 3.
1708.07860#30
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
31
# 6. Results: Comparisons and Combinations ImageNet Baseline: As an “upper bound” on performance, we train a full ResNet-101 model on ImageNet, which serves as a point of comparison for all our evaluations. Note that just under half of the parameters of this network are in block 4, which are not pre-trained in our self-supervised experiments (they are transferred from the ImageNet network only for the Pascal evaluations). We use the standard learning rate schedule of Szegedy et al. [38] for ImageNet training (multiply the learning rate by 0.94 every 2 epochs), but we don’t use such a schedule for our self-supervised tasks. # 6.1. Comparing individual self-supervision tasks Table 1 shows the performance of individual tasks for the three evaluation measures. Compared to previously-published results, our performance is significantly higher in all cases, most likely due to the additional depth of ResNet (cf. AlexNet) and additional training time. Note, our ImageNet-trained baseline for Faster-RCNN is also above the previously published result using ResNet (69.9 in [34] cf. 74.2 for ours), mostly due to the addition of multi-scale augmentation for the training images following [7].
1708.07860#31
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
32
Of the self-supervised pre-training methods, relative position and colorization are the top performers, with relative position winning on PASCAL and NYU, and colorization winning on ImageNet-frozen. Remarkably, relative position performs on-par with ImageNet pre-training on depth prediction, and the gap is just 7.5% mAP on PASCAL. The only task where the gap remains large is the ImageNet evaluation itself, which is not surprising since the ImageNet pre-training and evaluation use the same labels. Motion segmentation and exemplar training are somewhat worse than the others, with exemplar worst on Pascal and NYU, and motion segmentation worst on ImageNet. [Figure 3: three panels—ImageNet Recall@5, PASCAL VOC 2007 mAP, and NYU Depth V2 percent of pixels below 1.25—comparing Random Init, Relative Position, Colorization, Exemplar, Motion Segmentation, and ImageNet Supervised over self-supervised training time.]
1708.07860#32
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
33
Figure 3. Comparison of performance for different self-supervised methods over time. X-axis is compute time on the self-supervised task (∼2.4K GPU hours per tick). “Random Init” shows performance with no pre-training. Figure 3 shows how the performance changes as pre-training time increases (time is on the x-axis). After 16.8K GPU hours, performance is plateauing but has not completely saturated, suggesting that results can be improved slightly given more time. Interestingly, on the ImageNet-frozen evaluation, where colorization is winning, the gap relative to relative position is growing. Also, while most algorithms slowly improve performance with training time, [Table 1 data; rows correspond, in order, to Rel. Pos., Color, Exemplar, Mot. Seg., and INet Labels (matching Table 2). ImageNet top1 (Ours / Prev.): 36.21 / 31.7 [46]; 39.62 / 32.6 [46]; 31.51 / -; 27.62 / -; 66.82 / 51.0 [46]. ImageNet top5 (Ours): 59.21; 62.48; 53.08; 48.29; 85.10. PASCAL (Prev. / Ours): 61.7 [7] / 66.75; 46.9 [46] / 65.47; - / 60.94; 52.2 [27] / 61.13; 69.9 [34] / 74.17.]
1708.07860#33
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
34
Table 1. Comparison of our implementation with previous results on our evaluation tasks: ImageNet with frozen features (left), and PASCAL VOC 2007 mAP with fine-tuning (middle), and NYU depth (right, not used in previous works). Unlike elsewhere in this paper, ImageNet performance is reported here in terms of top 1 accuracy (versus recall at 5 elsewhere). Our ImageNet pre-training performance on ImageNet is lower than the performance He et al. [15] (78.25) reported for ResNet-101 since we remove block 4. exemplar training doesn’t fit this pattern: its performance falls steadily on ImageNet, and undulates on PASCAL and NYU. Even stranger, performance for exemplar is seemingly anti-correlated between Pascal and NYU from checkpoint to checkpoint. A possible explanation is that exemplar training encourages features that aren’t invariant beyond the training transformations (e.g. they aren’t invariant to object deformation or out-of-plane rotation), but are instead sensitive to the details of textures and low-level shapes. If these irrelevant details become prominent in the representation, they may serve as distractors for the evaluation classifiers.
1708.07860#34
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
35
Note that the random baseline performance is low relative to a shallower network, especially the ImageNet-frozen evaluation (a linear classifier on random AlexNet’s conv5 features has top-5 recall of 27.1%, cf. 10.5% for ResNet). All our pre-trained nets far outperform the random baseline. [Table 2 data (ImageNet / PASCAL / NYU): RP 59.21 / 66.75 / 80.54; RP+Col 66.64 / 68.75 / 79.87; RP+Ex 65.24 / 69.44 / 78.70; RP+MS 63.73 / 68.81 / 78.72; RP+Col+Ex 68.65 / 69.48 / 80.17; RP+Col+Ex+MS 69.30 / 70.53 / 79.25; INet Labels 85.10 / 74.17 / 80.06.] Table 2. Comparison of various combinations of self-supervised tasks. Checkpoints were taken after 16.8K GPU hours, equivalent to checkpoint 7 in Figure 3. Abbreviation key: RP: Relative Position; Col: Colorization; Ex: Exemplar Nets; MS: Motion Segmentation. Metrics: ImageNet: Recall@5; PASCAL: mAP; NYU: % Pixels below 1.25.
1708.07860#35
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
36
The fact that representations learnt by the various self-supervised methods have different strengths and weaknesses suggests that the features differ. Therefore, combining methods may yield further improvements. On the other hand, the lower-performing tasks might drag down the performance of the best ones. Resolving this uncertainty is a key motivator for the next section. Implementation Details: Unfortunately, intermittent network congestion can slow down experiments, so we don’t measure wall time directly. Instead, we estimate compute time for a given task by multiplying the per-task training step count by a constant factor, which is fixed across all experiments, representing the average step time when network congestion is minimal. We add training cost across all tasks used in an experiment, and snapshot when the total cost crosses a threshold. For relative position, 1 epoch through the ImageNet train set takes roughly 350 GPU hours; for colorization it takes roughly 90 hours; for exemplar nets roughly 60 hours. For motion segmentation, one epoch through our video dataset takes roughly 400 GPU hours.
1708.07860#36
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
37
# 6.2. Naïve multi-task combination of self-supervision tasks Table 2 shows results for combining self-supervised pre-training tasks. Beginning with one of our strongest performers—relative position—we see that adding any of our other tasks helps performance on ImageNet and Pascal. Adding either colorization or exemplar leads to more than 6 points gain on ImageNet. Furthermore, it seems that the boosts are complementary: adding both colorization and exemplar gives a further 2% boost. Our best-performing method was a combination of all four self-supervised tasks. To further probe how well our representation localizes objects, we evaluated the PASCAL detector at a more stringent overlap criterion: 75% IoU (versus the standard VOC 2007 criterion of 50% IoU). Our model gets 43.91% mAP in this setting, versus the standard ImageNet model’s performance of 44.27%, a gap of less than half a percent. Thus, the self-supervised approach may be especially useful when accurate localization is important. The depth evaluation performance shows far less variation over the single and combination tasks than the other evaluations. All methods are on par with ImageNet pre-training, with relative position exceeding this value slightly,
1708.07860#37
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
38
[Figure 4: three panels—ImageNet Recall@5, PASCAL VOC 2007 mAP, and NYU Depth V2 percent of pixels below 1.25—comparing Random Init, Relative Position, RP+Col, RP+Ex, RP+Msg, RP+Col+Ex, RP+Col+Ex+Msg, and ImageNet Supervised over training time.] Figure 4. Comparison of performance for different multi-task self-supervised methods over time. X-axis is compute time on the self-supervised task (∼2.4K GPU hours per tick). “Random Init” shows performance with no pre-training.
1708.07860#38
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
39
and the combination with exemplar or motion segmentation leading to a slight drop. Combining relative position with either exemplar or motion segmentation leads to a considerable improvement over those tasks alone. Finally, figure 4 shows how the performance of these methods improves with more training. One might expect that more tasks would result in slower training, since more must be learned. Surprisingly, however, the combination of [Table 3 data (ImageNet / PASCAL / NYU): RP 59.21 / 66.75 / 80.23; RP / H 62.33 / 66.15 / 80.39; RP+Col 66.64 / 68.75 / 79.87; RP+Col / H 68.08 / 68.26 / 79.69.] Table 3. Comparison of methods with and without harmonization, where relative position training is converted to grayscale to mimic the inputs to the colorization network. H denotes an experiment done with harmonization.
1708.07860#39
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
40
[Figure 5 row labels: Rel. Position, Exemplar, Color, Mot. Seg. (self-supervised tasks) and Net Frozen, Pascal07, NyuDepth (evaluation tasks).] Figure 5. Weights learned via the lasso technique. Each row shows one task: self-supervised tasks on top, evaluation tasks on bottom. Each square shows |α| for one ResNet “Unit” (shallowest layers at the left). Whiter colors indicate higher |α|, with a nonlinear scale to make smaller nonzero values easily visible. all four tasks performs the best or nearly the best even at our earliest checkpoint. # 6.3. Mediated combination of self-supervision tasks
1708.07860#40
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
41
all four tasks performs the best or nearly the best even at our earliest checkpoint. # 6.3. Mediated combination of self-supervision tasks Harmonization: We train two versions of a network on relative position and colorization: one using harmonization to make the relative position inputs look more like colorization, and one without it (equivalent to RP+Col in section 6.2 above). As a baseline, we make the same modification to a network trained only on relative position alone: i.e., we convert its inputs to grayscale. In this baseline, we don’t expect any performance boost over the original relative position task, because there are no other tasks to harmonize with. Results are shown in Table 3. However, on the ImageNet evaluation there is an improvement when we pre-train using only relative position (due to the change from adding noise to the other two channels to using grayscale input (three equal channels)), and this improvement follows through to the combined relative position and colorization tasks. The other two evaluation tasks do not show any improvement with harmonization. This suggests that our networks are actually quite good at dealing with stark differences between pre-training data domains when the features are fine-tuned at test time.
1708.07860#41
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
42
[Table 4 data (ImageNet / PASCAL / NYU): No Lasso 69.30 / 70.53 / 79.25; Eval Only Lasso 70.18 / 68.86 / 79.41; Pre-train Only Lasso 68.09 / 68.49 / 78.96; Pre-train & Eval Lasso 69.44 / 68.98 / 79.45.] Table 4. Comparison of performance with and without the lasso technique for factorizing representations, for a network trained on all four self-supervised tasks for 16.8K GPU-hours. “No Lasso” is equivalent to table 2’s RP+Col+Ex+MS. “Eval Only” uses the same pre-trained network, with lasso used only on the evaluation task, while “Pre-train Only” uses it only during pre-training. The final row uses lasso always.
1708.07860#42
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
43
Lasso training: As a first sanity check, Figure 5 plots the α matrix learned using all four self-supervised tasks. Different tasks do indeed select different layers. Somewhat surprisingly, however, there are strong correlations between the selected layers: most tasks want a combination of low-level information and high-level, semantic information. The depth evaluation network selects relatively high-level information, but evaluating on ImageNet-frozen and PASCAL makes the network select information from several levels, often not the ones that the pre-training tasks use. This suggests that, although there are useful features in the learned representation, the final output space for the representation is still losing some information that’s useful for evaluation tasks, suggesting a possible area for future work.
1708.07860#43
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
44
The final performance of this network is shown in Table 4. There are four cases: no lasso, lasso only on the evaluation tasks, lasso only at pre-training time, and lasso in both self-supervised training and evaluation. Unsurprisingly, using lasso only for pre-training performs poorly since not all information reaches the final layer. Surprisingly, however, using the lasso both for self-supervised training and evaluation is not very effective, contrary to previous results advocating that features should be selected from multiple layers for task transfer [14, 22, 36]. Perhaps the multi-task nature of our pre-training forces more information to propagate through the entire network, so explicitly extracting information from lower layers is unnecessary. # 7. Summary and extensions
1708.07860#44
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
45
# 7. Summary and extensions (i) Deeper networks improve self-supervision over shallow networks; (ii) Combining self-supervision tasks always improves performance over the tasks alone; (iii) The gap between ImageNet pre-trained and self-supervision pre-trained with four tasks is nearly closed for the VOC detection evaluation, and completely closed for NYU depth; (iv) Harmonization and lasso weightings only have minimal effects; and, finally, (v) Combining self-supervised tasks leads to faster training. There are many opportunities for further improvements: we can add augmentation (as in the exemplar task) to all tasks; we could add more self-supervision tasks (indeed new ones have appeared during the preparation of this paper, e.g. [10]); we could add further evaluation tasks – indeed depth prediction was not very informative, and replacing it by an alternative shape measurement task such as surface normal prediction may be more reliable; and we can experiment with methods for dynamically weighting the importance of tasks in the optimization.
1708.07860#45
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
46
It would also be interesting to repeat these experiments with a deep network such as VGG-16 where consecutive layers are less correlated, or with even deeper networks (ResNet-152, DenseNet [16] and beyond) to tease out the match between self-supervision tasks and network depth. For the lasso, it might be worth investigating block-level weightings using a group sparsity regularizer. For the future, given the performance improvements demonstrated in this paper, there is a possibility that self-supervision will eventually augment or replace fully supervised pre-training. Acknowledgements: Thanks to Relja Arandjelović, João Carreira, Viorica Pătrăucean and Karen Simonyan for helpful discussions. # A. Additional metrics for depth prediction
1708.07860#46
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
47
# A. Additional metrics for depth prediction Previous literature on depth prediction has established several measures of accuracy, since different errors may be more costly in different contexts. The measure used in the main paper was percent of pixels where relative depth—i.e., max(dgt/dp, dp/dgt)—is less than 1.25. This measures how often the estimated depth is very close to being correct. It is also standard to measure more relaxed thresholds of relative depth: 1.25^2 and 1.25^3. Furthermore, we can measure average errors across all pixels. Mean Absolute Error is the mean squared difference between ground truth and predicted values. Unlike the previous metrics, with Mean Absolute Error the worst predictions receive the highest penalties. Mean Relative Error weights the prediction error by the inverse of ground truth depth. Thus, errors on nearby parts of the scene are penalized more, which may be more relevant for, e.g., robot navigation.
1708.07860#47
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
48
Tables 5, 6, 7, and 8 are extended versions of tables 1, 2, 3, 4, respectively. For the most part, the additional measures tell the same story as the measure for depth reported in the main paper. Different self-supervised signals seem to perform similarly relative to one another: exemplar and relative position work best; color and motion segmentation work worse (table 5). Combinations still perform as well as the best method alone (table 6). Finally, it remains uncertain whether harmonization or the lasso technique provide a boost on depth prediction (tables 7 and 8). # References [1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015.
1708.07860#48
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
49
# References [1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015. [2] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. arXiv preprint arXiv:1606.07419, 2016. [3] Y. Aytar, C. Vondrick, and A. Torralba. Soundnet: Learning sound representations from unlabeled video. In NIPS, 2016. [4] J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous SGD. In ICLR Workshop Track, 2016. [5] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In NIPS, 2012. [6] Semi-supervised learning with context-conditional generative adversarial networks. arXiv preprint arXiv:1611.06430, 2016.
1708.07860#49
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]
1708.07860
50
Semi-supervised learning with context-conditional generative adversarial net- works. arXiv preprint arXiv:1611.06430, 2016. [7] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised vi- sual representation learning by context prediction. In ICCV, 2015. [8] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, 2014. [9] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolu- tional architecture. In ICCV, 2015. [10] B. Fernando, H. Bilen, E. Gavves, and S. Gould. Self- supervised video representation learning with odd-one-out networks. arXiv preprint arXiv:1611.06646, 2016. [11] P. F¨oldi´ak. Learning invariance from transformation se- quences. Neural Computation, 3(2):194–200, 1991. [12] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In ICCV, 2015.
1708.07860#50
Multi-task Self-Supervised Visual Learning
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
http://arxiv.org/pdf/1708.07860
Carl Doersch, Andrew Zisserman
cs.CV
Published at ICCV 2017
null
cs.CV
20170825
20170825
[ { "id": "1611.06430" }, { "id": "1602.07261" }, { "id": "1611.03530" }, { "id": "1611.06646" }, { "id": "1606.04671" }, { "id": "1612.06370" }, { "id": "1606.07419" }, { "id": "1609.02132" }, { "id": "1610.01685" } ]