doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1705.07565 | 54 | We also visualize the distribution of parameters' sensitivity scores L_q estimated by Layer-wise OBS in Figure 3, and find that parameters with little impact on the layer output dominate. This further verifies our hypothesis that deep neural networks usually contain a lot of redundant parameters. As shown in the figure, the distribution of parameters' sensitivity scores in Layer-wise OBS is heavy-tailed. This means that a lot of parameters can be pruned with minor impact on the prediction outcome.
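To make the score inspection concrete, the sketch below computes OBS-style sensitivity scores for a toy fully connected layer and reports how many of them are tiny. It assumes the classic saliency L_q = w_q^2 / (2 [H^-1]_qq) with a layer-wise Hessian estimated as H ≈ XX^T/n; the toy data, the damping term and all variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 1000, 64, 32                  # toy layer: n samples, 64 inputs, 32 outputs
X = rng.normal(size=(d_in, n))                 # layer inputs (one column per sample)
W = rng.normal(scale=0.1, size=(d_out, d_in))  # pretrained weights (toy stand-in)

# Layer-wise Hessian of the squared reconstruction error for this layer.
H = X @ X.T / n + 1e-6 * np.eye(d_in)          # small damping keeps the inverse stable
H_inv_diag = np.diag(np.linalg.inv(H))         # [H^-1]_qq for every input weight q

# OBS-style sensitivity score for every weight of the layer.
scores = (W ** 2) / (2.0 * H_inv_diag)         # broadcasts over the output dimension

# Inspect the score distribution (for a real trained layer this is heavy-tailed).
print("fraction of weights with score < 1e-3:", np.mean(scores < 1e-3))
print("score percentiles (50/90/99):", np.percentile(scores, [50, 90, 99]))
```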
Figure 4: Retraining patterns of LWC and L-OBS. L-OBS has a better starting point and fully resumes the original performance after 740 iterations for LeNet-5.
Random pruning gets the poorest result as expected but can still preserve prediction accuracy when the pruning ratio is smaller than 30%. This also indicates the high redundancy of the network. | 1705.07565#54 | Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | How to develop slim and accurate deep neural networks has become crucial for
real-world applications, especially for those employed in embedded systems.
Though previous work along this research line has shown some promising results,
most existing methods either fail to significantly compress a well-trained deep
network or require a heavy retraining process for the pruned deep network to
re-boost its prediction performance. In this paper, we propose a new layer-wise
pruning method for deep neural networks. In our proposed method, parameters of
each individual layer are pruned independently based on second order
derivatives of a layer-wise error function with respect to the corresponding
parameters. We prove that the final prediction performance drop after pruning
is bounded by a linear combination of the reconstructed errors caused at each
layer. Therefore, there is a guarantee that one only needs to perform a light
retraining process on the pruned network to resume its original prediction
performance. We conduct extensive experiments on benchmark datasets to
demonstrate the effectiveness of our pruning method compared with several
state-of-the-art baseline methods. | http://arxiv.org/pdf/1705.07565 | Xin Dong, Shangyu Chen, Sinno Jialin Pan | cs.NE, cs.CV, cs.LG | null | null | cs.NE | 20170522 | 20171109 | [
{
"id": "1607.03250"
},
{
"id": "1608.08710"
},
{
"id": "1511.06067"
},
{
"id": "1701.04465"
},
{
"id": "1603.04467"
},
{
"id": "1607.05423"
}
] |
1705.07565 | 55 | Random pruning gets the poorest result as expected but can still preserve prediction accuracy when the pruning ratio is smaller than 30%. This also indicates the high redundancy of the network.
Compared with LWC and ApoZW, L-OBS is able to preserve the original accuracy until the pruning ratio reaches about 96%, which we call the "pruning inflection point". As mentioned in Section 3.4, the reason for this "pruning inflection point" is that the distribution of parameters' sensitivity scores is heavy-tailed, and the sensitivity scores beyond the "pruning inflection point" become considerable all at once. The percentage of parameters with sensitivity smaller than 0.001 is about 92%, which matches well with the pruning ratio at the inflection point.
L-OBS not only preserves models' performance when pruning one single layer, but also ensures only a tiny drop in performance when pruning all layers in a model. This claim holds because of the theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of the reconstructed errors for each layer in Section 3.3. As shown in Figure 4, L-OBS is able to resume the original performance after 740 iterations for LeNet-5 with a compression ratio of 7%.
# How To Set Tolerable Error Threshold
1705.07565 | 56 | # How To Set Tolerable Error Threshold
One of the most important bounds we proved is that there is a theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of the reconstructed errors for each pruning operation in each layer. This bound enables us to prune a whole model layer by layer without concerns, because the accumulated error of the ultimate network output is bounded by the weighted sum of layer-wise errors. As long as we control the layer-wise errors, we can control the accumulated error. Although L-OBS allows users to control the accumulated error of the ultimate network output, ε̂ = ‖Y^L − Ŷ^L‖_F, this error measures the difference between network outputs before and after pruning and is not strictly inversely proportional to the final accuracy. In practice, one can increase the tolerable error threshold ε from a relatively small initial value to incrementally prune more and more parameters while monitoring model performance, and make a trade-off between compression ratio and performance drop. The corresponding relation (in the first layer of LeNet-300-100) between the tolerable error threshold and the pruning ratio is shown in Figure 5.
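A minimal sketch of this threshold sweep, assuming per-parameter sensitivity scores for one layer are already available (simulated here) and that pruning a parameter contributes its score to the accumulated layer error — an assumption of the illustration, not the paper's exact bound:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated heavy-tailed sensitivity scores for one layer (stand-in for L-OBS output).
scores = np.sort(rng.pareto(3.0, size=100_000) * 1e-4)

def pruning_ratio(tolerable_error):
    """Prune parameters in order of increasing sensitivity until the
    accumulated (approximate) layer error would exceed the threshold."""
    cumulative_error = np.cumsum(scores)
    n_pruned = np.searchsorted(cumulative_error, tolerable_error)
    return n_pruned / scores.size

# Sweep epsilon from a small value upward and watch the pruning ratio grow,
# then pick the point giving the desired compression/performance trade-off.
for eps in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(f"epsilon={eps:.0e}  pruning ratio={pruning_ratio(eps):.2%}")
```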
# Iterative Layer-wise OBS
1705.07565 | 57 | # Iterative Layer-wise OBS
As mentioned in Section 4.1, to achieve a better compression ratio, L-OBS can quite flexibly be adapted to an iterative version, which performs pruning and light retraining alternately. Specifically, the two-stage iterative L-OBS applied to LeNet-300-100, LeNet-5 and VGG-16 in this work follows this work flow: start from a well-trained model, prune it, and then lightly retrain the pruned model to reboot its performance to a degree; the cycle is repeated as needed. In practice, if the required compression ratio is beyond the "pruning inflection point", users have to deploy iterative L-OBS, though the ultimate compression ratio is not of too much importance. Experimental results are shown in Tables 3, 4 and 5,
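The prune-then-lightly-retrain alternation can be illustrated on a single toy layer as below; magnitude-based selection stands in for a full L-OBS step and a least-squares refit of the surviving weights stands in for light retraining, so this only sketches the two-stage loop, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 2000, 64, 32
X = rng.normal(size=(n, d_in))
W_true = rng.normal(scale=0.1, size=(d_in, d_out))
Y = X @ W_true                       # reference layer output before pruning

W = W_true.copy()
mask = np.ones_like(W, dtype=bool)   # True = weight is kept

for stage in range(3):               # a few prune/retrain stages
    # "Prune": drop the 30% smallest-magnitude surviving weights (saliency stand-in).
    threshold = np.quantile(np.abs(W[mask]), 0.3)
    mask &= np.abs(W) > threshold
    W *= mask
    # "Light retrain": refit each output unit on its remaining inputs (least squares).
    for j in range(d_out):
        keep = mask[:, j]
        if keep.any():
            W[keep, j], *_ = np.linalg.lstsq(X[:, keep], Y[:, j], rcond=None)
    err = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
    print(f"stage {stage}: kept {mask.mean():.1%} of weights, relative error {err:.3f}")
```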
Figure 5: The corresponding relation between tolerable error threshold and pruning ratio.
where CR(n) denotes the ratio of the number of preserved parameters to the number of original parameters after the n-th pruning stage.
Table 3: For LeNet-300-100, iterative L-OBS (two-stage) achieves a compression ratio of 1.5%
1705.07565 | 58 | Table 3: For LeNet-300-100, iterative L-OBS(two-stage) achieves compression ratio of 1.5%
Layer | Weights | CR1 | CR2
fc1 | 235K | 7% | 1%
fc2 | 30K | 20% | 4%
fc3 | 1K | 70% | 54%
Total | 266K | 8.7% | 1.5%
Table 4: For LeNet-5, iterative L-OBS(two-stage) achieves compression ratio of 0.9%
Layer | Weights | CR1 | CR2
conv1 | 0.5K | 60% | 20%
conv2 | 25K | 60% | 1%
fc1 | 400K | 6% | 0.9%
fc2 | 5K | 30% | 8%
Total | 431K | 9.5% | 0.9%
Table 5: For VGG-16, iterative L-OBS (two-stage) achieves a compression ratio of 7.5%
1705.07565 | 59 | Table 5: For VGG-16, iterative L-OBS (two-stage) achieves a compression ratio of 7.5%
Layer | Weights | CR1 | CR2
conv1_1 | 2K | 70% | 58%
conv1_2 | 37K | 50% | 36%
conv2_1 | 74K | 70% | 42%
conv2_2 | 148K | 70% | 32%
conv3_1 | 295K | 60% | 53%
conv3_2 | 590K | 60% | 34%
conv3_3 | 590K | 60% | 39%
conv4_1 | 1M | 50% | 43%
conv4_2 | 2M | 50% | 24%
conv4_3 | 2M | 50% | 30%
conv5_1 | 2M | 70% | 35%
conv5_2 | 2M | 70% | 43%
conv5_3 | 2M | 60% | 32%
fc6 | 103M | 8% | 2%
fc7 | 17M | 10% | 5%
fc8 | 4M | 30% | 17%
1705.07485 | 0 | arXiv:1705.07485v2 [cs.LG] 23 May 2017
# Shake-Shake regularization
# Xavier Gastaldi [email protected]
# Abstract
The method introduced in this paper aims at helping deep learning practitioners faced with an overfit problem. The idea is to replace, in a multi-branch network, the standard summation of parallel branches with a stochastic affine combination. Applied to 3-branch residual networks, shake-shake regularization improves on the best single shot published results on CIFAR-10 and CIFAR-100 by reaching test errors of 2.86% and 15.85%. Experiments on architectures without skip connections or Batch Normalization show encouraging results and open the door to a large set of applications. Code is available at https://github.com/xgastaldi/shake-shake.
# Introduction | 1705.07485#0 | Shake-Shake regularization | The method introduced in this paper aims at helping deep learning
practitioners faced with an overfit problem. The idea is to replace, in a
multi-branch network, the standard summation of parallel branches with a
stochastic affine combination. Applied to 3-branch residual networks,
shake-shake regularization improves on the best single shot published results
on CIFAR-10 and CIFAR-100 by reaching test errors of 2.86% and 15.85%.
Experiments on architectures without skip connections or Batch Normalization
show encouraging results and open the door to a large set of applications. Code
is available at https://github.com/xgastaldi/shake-shake | http://arxiv.org/pdf/1705.07485 | Xavier Gastaldi | cs.LG, cs.CV | null | null | cs.LG | 20170521 | 20170523 | [
{
"id": "1612.01490"
},
{
"id": "1609.05672"
},
{
"id": "1608.03983"
},
{
"id": "1511.06807"
},
{
"id": "1611.05431"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1705.07485 | 1 | # Introduction
Deep residual nets (He et al., 2016a) were first introduced in the ILSVRC & COCO 2015 competitions (Russakovsky et al., 2015; Lin et al., 2014), where they won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. Since then, significant effort has been put into trying to improve their performance. Scientists have investigated the impact of pushing depth (He et al., 2016b; Huang et al., 2016a), width (Zagoruyko & Komodakis, 2016) and cardinality (Xie et al., 2016; Szegedy et al., 2016; Abdi & Nahavandi, 2016).
1705.07485 | 2 | While residual networks are powerful models, they still overfit on small datasets. A large number of techniques have been proposed to tackle this problem, including weight decay (Nowlan & Hinton, 1992), early stopping, and dropout (Srivastava et al., 2014). While not directly presented as a regularization method, Batch Normalization (Ioffe & Szegedy, 2015) regularizes the network by computing statistics that fluctuate with each mini-batch. Similarly, Stochastic Gradient Descent (SGD) (Bottou, 1998; Sutskever et al., 2013) can also be interpreted as Gradient Descent using noisy gradients, and the generalization performance of neural networks often depends on the size of the mini-batch (see Keskar et al. (2017)).
1705.07485 | 3 | Pre-2015, most computer vision classification architectures used dropout to combat overfitting, but the introduction of Batch Normalization reduced its effectiveness (see Ioffe & Szegedy (2015); Zagoruyko & Komodakis (2016); Huang et al. (2016b)). Searching for other regularization methods, researchers started to look at the possibilities specifically offered by multi-branch networks. Some of them noticed that, given the right conditions, it was possible to randomly drop some of the information paths during training (Huang et al., 2016b; Larsson et al., 2016).
Like these last 2 works, the method proposed in this document aims at improving the generalization ability of multi-branch networks by replacing the standard summation of parallel branches with a stochastic affine combination.
# 1.1 Motivation
Data augmentation techniques have traditionally been applied to input images only. However, for a computer, there is no real difference between an input image and an intermediate representation. As a consequence, it might be possible to apply data augmentation techniques to internal representations.
Shake-Shake regularization was created as an attempt to produce this sort of effect by stochastically "blending" 2 viable tensors.
# 1.2 Model description on 3-branch ResNets
1705.07485 | 4 | # 1.2 Model description on 3-branch ResNets
Let x_i denote the tensor of inputs into residual block i. W_i^(1) and W_i^(2) are the sets of weights associated with the 2 residual units. F denotes the residual function, e.g. a stack of two 3x3 convolutional layers. x_{i+1} denotes the tensor of outputs from residual block i.
A typical pre-activation ResNet with 2 residual branches would follow this equation:
x_{i+1} = x_i + F(x_i, W_i^(1)) + F(x_i, W_i^(2))   (1)
Proposed modification: If α_i is a random variable following a uniform distribution between 0 and 1, then during training:
x_{i+1} = x_i + α_i F(x_i, W_i^(1)) + (1 − α_i) F(x_i, W_i^(2))   (2)
Following the same logic as for dropout, all α_i are set to their expected value of 0.5 at test time.
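As a concrete reading of Eq. (1) and Eq. (2), the toy function below adds the two branch outputs with a freshly drawn α during training and with the expected value 0.5 at test time; the linear "branches" and all names are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def shake_shake_forward(x, branch1, branch2, training=True):
    """x_{i+1} = x_i + alpha*F1(x_i) + (1-alpha)*F2(x_i); alpha = 0.5 at test time."""
    alpha = rng.uniform(0.0, 1.0) if training else 0.5
    return x + alpha * branch1(x) + (1.0 - alpha) * branch2(x)

# Toy residual branches (stand-ins for ReLU-Conv3x3-BN-ReLU-Conv3x3-BN stacks).
W1 = rng.normal(scale=0.1, size=(8, 8))
W2 = rng.normal(scale=0.1, size=(8, 8))
x = rng.normal(size=(4, 8))                      # a mini-batch of 4 vectors
train_out = shake_shake_forward(x, lambda v: v @ W1, lambda v: v @ W2, training=True)
test_out = shake_shake_forward(x, lambda v: v @ W1, lambda v: v @ W2, training=False)
print(train_out.shape, test_out.shape)
```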
This method can be seen as a form of drop-path (Larsson et al., 2016) where residual branches are scaled down instead of being completely dropped (i.e. multiplied by 0).
1705.07485 | 5 | Replacing binary variables with enhancement or reduction coefficients is also explored in dropout variants like shakeout (Kang et al., 2016) and whiteout (Yinan et al., 2016). However, where these methods perform an element-wise multiplication between an input tensor and a noise tensor, shake-shake regularization multiplies the whole image tensor with just one scalar α_i (or 1 − α_i).
1.3 Training procedure
Figure 1: Left: Forward training pass. Center: Backward training pass. Right: At test time.
As shown in Figure 1, all scaling coefficients are overwritten with new random numbers before each forward pass. The key to making this work is to repeat this coefficient update operation before each backward pass. This results in a stochastic blend of forward and backward flows during training.
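One way to realize "new coefficients before each forward pass and again before each backward pass" is a custom autograd function that blends the two branch outputs with α on the way forward and with an independently drawn β on the way back. The sketch below uses standard PyTorch APIs and our own naming; it is not the reference Torch7 implementation.

```python
import torch

class ShakeShakeFn(torch.autograd.Function):
    """Blend two residual-branch outputs with alpha forward / beta backward."""

    @staticmethod
    def forward(ctx, y1, y2):
        # One coefficient per image in the mini-batch ("Image" level).
        alpha = torch.rand(y1.size(0), 1, 1, 1, device=y1.device, dtype=y1.dtype)
        return alpha * y1 + (1.0 - alpha) * y2

    @staticmethod
    def backward(ctx, grad_out):
        # Fresh coefficients for the backward pass: stochastic gradient blend.
        beta = torch.rand(grad_out.size(0), 1, 1, 1,
                          device=grad_out.device, dtype=grad_out.dtype)
        return beta * grad_out, (1.0 - beta) * grad_out

def shake_shake(y1, y2, training):
    # At test time both branches are weighted by the expected value 0.5.
    return ShakeShakeFn.apply(y1, y2) if training else 0.5 * (y1 + y2)

y1 = torch.randn(8, 16, 32, 32, requires_grad=True)
y2 = torch.randn(8, 16, 32, 32, requires_grad=True)
shake_shake(y1, y2, training=True).sum().backward()
print(y1.grad.shape)  # gradients flow through both branches with random weights
```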
1705.07485 | 6 | Related to this idea are the works of An (1996) and Neelakantan et al. (2015). These authors showed that adding noise to the gradient during training helps training and generalization of complicated neural networks. Shake-Shake regularization can be seen as an extension of this concept where gradient noise is replaced by a form of gradient augmentation.
# Improving on the best single shot published results on CIFAR
# 2.1 CIFAR-10
# 2.1.1 Implementation details
1705.07485 | 7 | The Shake-Shake code is based on fb.resnet.torch1 and is available at https://github.com/xgastaldi/shake-shake. The first layer is a 3x3 Conv with 16 filters, followed by 3 stages each having 4 residual blocks. The feature map size is 32, 16 and 8 for each stage. Width is doubled when downsampling. The network ends with an 8x8 average pooling and a fully connected layer (total 26 layers deep). Residual paths have the following structure: ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul. The skip connections represent the identity function except during downsampling, where a slightly customized structure consisting of 2 concatenated flows is used. Each of the 2 flows has the following components: 1x1 average pooling with step 2 followed by a 1x1 convolution. The input of one of the two flows is shifted by 1 pixel right and 1 pixel down to make the average pooling sample from a different position. The concatenation of the two flows doubles the width. Models were trained on the CIFAR-10 (Krizhevsky, 2009) 50k training set and evaluated on the 10k test set. Standard translation and flipping data augmentation is applied on the 32x32 input image. Due to the introduced stochasticity, all models were trained for 1800 epochs. Training starts with a learning rate of 0.2 and is annealed using a Cosine function without restart (see Loshchilov & Hutter (2016)). All models were trained on 2 GPUs with a mini-batch size of 128. Other implementation details are as in fb.resnet.torch.
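The customized downsampling skip described above (two flows of 1x1 average pooling with stride 2 plus a 1x1 convolution, one flow sampled from a position shifted by one pixel, then concatenated to double the width) could look roughly as follows; the module name, the zero-padding used for the shift and the channel split are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftConcatDownsample(nn.Module):
    """Skip connection used when downsampling: two flows, concatenated."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Each flow: 1x1 average pooling with stride 2, then a 1x1 convolution.
        self.conv1 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=1, bias=False)
        self.conv2 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=1, bias=False)

    def forward(self, x):
        flow1 = self.conv1(F.avg_pool2d(x, kernel_size=1, stride=2))
        # Shift the input of the second flow by 1 pixel right and 1 pixel down
        # so its average pooling samples from a different position.
        shifted = F.pad(x, (1, 0, 1, 0))[:, :, :-1, :-1]
        flow2 = self.conv2(F.avg_pool2d(shifted, kernel_size=1, stride=2))
        return torch.cat([flow1, flow2], dim=1)   # concatenation doubles the width

x = torch.randn(2, 32, 16, 16)
print(ShiftConcatDownsample(32, 64)(x).shape)     # -> torch.Size([2, 64, 8, 8])
```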
1705.07485 | 8 | a different position. The concatenation of the two flows doubles the width. Models were trained on the CIFAR-10 (Krizhevsky, 2009) 50k training set and evaluated on the 10k test set. Standard translation and flipping data augmentation is applied on the 32x32 input image. Due to the introduced stochasticity, all models were trained for 1800 epochs. Training starts with a learning rate of 0.2 and is annealed using a Cosine function without restart (see Loshchilov & Hutter (2016)). All models were trained on 2 GPUs with a mini-batch size of 128. Other implementation details are as in fb.resnet.torch.
1705.07485 | 9 | # Influence of Forward and Backward training procedures
The base network is a 26 2x32d ResNet (i.e. the network has a depth of 26, 2 residual branches and the first residual block has a width of 32). "Shake" means that all scaling coefficients are overwritten with new random numbers before the pass. "Even" means that all scaling coefficients are set to 0.5 before the pass. "Keep" means that we keep, for the backward pass, the scaling coefficients used during the forward pass. "Batch" means that, for each residual block i, we apply the same scaling coefficient for all the images in the mini-batch. "Image" means that, for each residual block i, we apply a different scaling coefficient for each image in the mini-batch (see Image level update procedure below).
1705.07485 | 10 | Image level update procedure: Let x_0 denote the original input mini-batch tensor of dimensions 128x3x32x32. The first dimension « stacks » 128 images of dimensions 3x32x32. Inside the second stage of a 26 2x32d model, this tensor is transformed into a mini-batch tensor x_i of dimensions 128x64x16x16. Applying Shake-Shake regularization at the Image level means slicing this tensor along the first dimension and, for each of the 128 slices, multiplying the j-th slice (of dimensions 64x16x16) with a scalar αi.j (or 1 − αi.j).
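The per-image slicing described above amounts to drawing one scalar per image and letting it broadcast over the 64x16x16 slice; a small sketch with the shapes from the text (the branch outputs are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
x_i = rng.normal(size=(128, 64, 16, 16))   # mini-batch tensor inside stage 2
y1 = x_i * 0.9                             # stand-ins for the two branch outputs
y2 = x_i * 1.1

# "Image" level: one alpha_{i.j} per image j, broadcast over its 64x16x16 slice.
alpha = rng.uniform(0.0, 1.0, size=(128, 1, 1, 1))
blended = alpha * y1 + (1.0 - alpha) * y2

# "Batch" level would instead use a single scalar for the whole mini-batch.
alpha_batch = rng.uniform(0.0, 1.0)
blended_batch = alpha_batch * y1 + (1.0 - alpha_batch) * y2
print(blended.shape, blended_batch.shape)
```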
1705.07485 | 11 | The numbers in Table 1 represent the average of 3 runs except for the 96d models which were run 5 times. What can be observed in Table 1 and Figure 2 is that "Shake-Keep" or "S-K" models (i.e. "Shake" Backward) do not have a particularly strong effect on the error rate. The network seems to be able to see through the perturbations when the weight update is done with the same ratios as during the forward pass. "Even-Shake" only works when applied at the "Image" level. "Shake-Even" and "Shake-Shake" models all produce strong results at 32d, but the better training curves of "Shake-Shake" models start to make a difference when the number of filters of the first residual block is increased to 64d. Applying coefficients at the "Image" level seems to improve regularization.
# 2.2 CIFAR-100
1705.07485 | 12 | # 2.2 CIFAR-100
The network architecture chosen for CIFAR-100 is a ResNeXt without pre-activation (this model gives slightly better results on CIFAR-100 than the model used for CIFAR-10). Hyperparameters are the same as in Xie et al. (2016) except for the learning rate which is annealed using a Cosine function and the number of epochs which is increased to 1800. The network in Table 2 is a ResNeXt-29 2x4x64d (2 residual branches with 4 grouped convolutions, each with 64 channels). Due to the
# 1https://github.com/facebook/fb.resnet.torch
Table 1: Error rates (%) on CIFAR-10. Results that surpass all competing methods by more than 0.1% are bold and the overall best result is blue.
Forward | Backward | Level | 26 2x32d | 26 2x64d | 26 2x96d
Even | Even | n/a | 4.27 | 3.76 | 3.58
Even | Shake | Batch | 4.44 | - | -
Shake | Keep | Batch | 4.11 | - | -
Shake | Even | Batch | 3.47 | 3.30 | -
Shake | Shake | Batch | 3.67 | 3.07 | -
Even | Shake | Image | 4.11 | - | -
Shake | Keep | Image | 4.09 | - | -
Shake | Even | Image | 3.47 | 3.20 | -
Shake | Shake | Image | 3.55 | 2.98 | 2.86
1705.07485 | 13 |
Figure 2: Left: Training curves of a selection of 32d models. Right: Training curves (dark) and test curves (light) of the 96d models.
combination of the larger model (34.4M parameters) and the long training time, fewer tests were performed than on CIFAR-10.
Table 2: Error rates (%) on CIFAR-100. Results that surpass all competing methods by more than 0.5% are bold and the overall best result is blue.
Forward | Backward | Level | Runs | 29 2x4x64d
Even | Even | n/a | 2 | 16.34
Shake | Even | Image | 3 | 15.85
Shake | Shake | Image | 1 | 15.97
Interestingly, a key hyperparameter on CIFAR-100 is the batch size which, compared to CIFAR-10, has to be reduced from 128 to 32 if using 2 GPUs.2 Without this reduction, the E-E-B network does not produce competitive results. As shown in Table 2, the increased regularization produced by the smaller batch size impacts the training procedure selection and makes S-E-I a slightly better choice.
# 2As per notes in https://github.com/facebookresearch/ResNeXt
# 2.3 Comparisons with state-of-the-art results
1705.07485 | 14 | # 2As per notes in https://github.com/facebookresearch/ResNeXt
# 2.3 Comparisons with state-of-the-art results
At the time of writing, the best single shot model on CIFAR-10 is a DenseNet-BC k=40 (3.46% error rate) with 25.6M parameters. The second best model is a ResNeXt-29, 16x64d (3.58% error rate) with 68.1M parameters. A small 26 2x32d "Shake-Even-Image" model with 2.9M parameters obtains approximately the same error rate. This is roughly 9 times fewer parameters than the DenseNet model and 23 times fewer parameters than the ResNeXt model. A 26 2x96d "Shake-Shake-Image" ResNet with 26.2M parameters reaches a test error of 2.86% (Average of 5 runs - Median 2.87%, Min = 2.72%, Max = 2.95%).
1705.07485 | 15 | On CIFAR-100, a few hyperparameter modifications of a standard ResNeXt-29 8x64d (batchsize, no pre-activation, longer training time and cosine annealing) lead to a test error of 16.34%. Adding shake-even regularization reduces the test error to 15.85% (Average of 3 runs - Median 15.85%, Min = 15.66%, Max = 16.04%).
Table 3: Test error (%) and model size on CIFAR. Best results are blue.
Method | Depth | Params | C10 | C100
Wide ResNet | 28 | 36.5M | 3.8 | 18.3
ResNeXt-29, 16x64d | 29 | 68.1M | 3.58 | 17.31
DenseNet-BC (k=40) | 190 | 25.6M | 3.46 | 17.18
C10 Model S-S-I | 26 | 26.2M | 2.86 | -
C100 Model S-E-I | 29 | 34.4M | - | 15.85
# 3 Correlation between residual branches
To check whether the correlation between the 2 residual branches is increased or decreased by the regularization, the following test was performed:
For each residual block:
1705.07485 | 16 | For each residual block:
1. Forward a mini-batch tensor x_i through residual branch 1 (ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul(0.5)) and store the output tensor in y_i^(1). Do the same for residual branch 2 and store the output in y_i^(2).
2. Flatten these 2 tensors into vectors flat_i^(1) and flat_i^(2). Calculate the covariance between each corresponding item in the 2 vectors using an online version of the covariance algorithm.
3. Calculate the variances of flat_i^(1) and flat_i^(2).
4. Repeat until all the images in the test set have been forwarded. Use the resulting covariance and variances to calculate the correlation.
This algorithm was run on CIFAR-10 for 3 E-E-B models and 3 S-S-I models, both 26 2x32d. The results are presented in Figure 3. The correlation between the output tensors of the 2 residual branches seems to be reduced by the regularization. This would support the assumption that the regularization forces the branches to learn something different.
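The per-block test can be written with running sums accumulated over the test set, as sketched below; the two flattened branch outputs are simulated, and the single-pass covariance formulation is a standard one rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Running sums for a single residual block: correlation between the two
# flattened branch outputs, accumulated one mini-batch at a time.
n = 0
sum1 = sum2 = sum11 = sum22 = sum12 = 0.0

for _ in range(100):                        # 100 mini-batches of the "test set"
    base = rng.normal(size=(128, 64 * 16 * 16))
    flat1 = base + 0.5 * rng.normal(size=base.shape)   # branch 1 output, flattened
    flat2 = base + 0.5 * rng.normal(size=base.shape)   # branch 2 output, flattened
    n += flat1.size
    sum1 += flat1.sum()
    sum2 += flat2.sum()
    sum11 += (flat1 ** 2).sum()
    sum22 += (flat2 ** 2).sum()
    sum12 += (flat1 * flat2).sum()

cov = sum12 / n - (sum1 / n) * (sum2 / n)
var1 = sum11 / n - (sum1 / n) ** 2
var2 = sum22 / n - (sum2 / n) ** 2
print("correlation between the two branches:", cov / np.sqrt(var1 * var2))
```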
1705.07485 | 17 | One problem to be mindful of is the issue of alignment (see Li et al. (2016)). The method above assumes that the summation at the end of the residual blocks forces an alignment of the layers on the left and right residual branches. This can be verified by calculating the layer-wise correlation for each configuration of the first 3 layers of each block.
The results are presented in Figure 4. L1R3 for residual block i means the correlation between the activations of the first layer in y_i^(1) (left branch) and the third layer in y_i^(2) (right branch). Figure 4 shows that the correlation between the same layers on the left and right branches (i.e. L1R1, L2R2, etc.) is higher than in the other configurations, which is consistent with the assumption that the summation forces alignment.
1705.07485 | 18 | Figure 3: Correlation results on E-E-B and S-S-I models.
1705.07485 | 22 | Figure 4: Layer-wise correlation between the first 3 layers of each residual block.
# 4 Regularization strength
This section looks at what would happen if we give, during the backward pass, a large weight to a branch that received a small weight in the forward pass (and vice-versa).
Let αi.j be the coefficient used during the forward pass for image j in residual block i. Let βi.j be the coefficient used during the backward pass for the same image at the same position in the network.
The first test (method 1) is to set βi.j = 1 - αi.j. All the tests in this section were performed on CIFAR-10 using 26 2x32d models at the Image level. These models are compared to a 26 2x32d Shake-Keep-Image model. The results of M1 can be seen on the left part of Figure 5 (blue curve). The effect is quite drastic and the training error stays really high.
Tests M2 to M5 in Table 4 were designed to understand why Method 1 (M1) has such a strong effect. The right part of Figure 5 illustrates Table 4 graphically.
What can be seen is that:
1. The regularization effect seems to be linked to the relative position of βi.j compared to αi.j
1705.07485 | 26 | What can be seen is that:
1. The regularization effect seems to be linked to the relative position of βi.j compared to αi.j
2. The further away βi.j is from αi.j, the stronger the regularization effect
3. There seems to be a jump in strength when 0.5 is crossed
These insights could be useful when trying to control with more accuracy the strength of the regularization.
# Table 4: Update rules for βi.j.
1705.07485 | 27 | These insights could be useful when trying to control with more accuracy the strength of the regularization.
6
# Table 4: Update rules for βi.j.
| Method | αi.j < 0.5 | αi.j ≥ 0.5 |
|---|---|---|
| S-S-I | rand(0, 1) | rand(0, 1) |
| S-E-I | 0.5 | 0.5 |
| M1 | 1 − αi.j | 1 − αi.j |
| M2 | rand(0, 1) · αi.j | rand(0, 1) · (1 − αi.j) + αi.j |
| M3 | rand(0, 1) · (0.5 − αi.j) + αi.j | rand(0, 1) · (αi.j − 0.5) + 0.5 |
| M4 | rand(0, 1) · (0.5 − αi.j) + 0.5 | rand(0, 1) · (0.5 − (1 − αi.j)) + (1 − αi.j) |
| M5 | rand(0, 1) · αi.j + (1 − αi.j) | rand(0, 1) · (1 − αi.j) |
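The update rules in Table 4 translate directly into code. The sketch below is an illustrative re-implementation (function and variable names are mine, not from the released code); rand(0, 1) is a fresh uniform draw per image and per residual block.

```python
import random

def beta_from_alpha(alpha, method):
    """Update rules for beta_i.j given alpha_i.j, following Table 4."""
    r = random.random()  # rand(0, 1)
    if method == "S-S-I":
        return r
    if method == "S-E-I":
        return 0.5
    if method == "M1":
        return 1 - alpha
    if alpha < 0.5:
        return {
            "M2": r * alpha,
            "M3": r * (0.5 - alpha) + alpha,
            "M4": r * (0.5 - alpha) + 0.5,
            "M5": r * alpha + (1 - alpha),
        }[method]
    return {
        "M2": r * (1 - alpha) + alpha,
        "M3": r * (alpha - 0.5) + 0.5,
        "M4": r * (0.5 - (1 - alpha)) + (1 - alpha),
        "M5": r * (1 - alpha),
    }[method]
```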
Figure 5: Left: Training curves (dark) and test curves (light) of models M1 to M5. Right: Illustration of the different methods in Table 4.
# 5 Removing skip connections / Removing Batch Normalization
One interesting question is whether the skip connection plays a role. A lot of deep learning systems don't use ResNets, and making this type of regularization work without skip connections could extend the number of potential applications.
Table 5 and Figure 6 present the results of removing the skip connection. The first variant (A) is exactly like the 26 2x32d used on CIFAR-10 but without the skip connection (i.e. 2 branches with the following components ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul). The second variant (B) is the same as A but with only 1 convolutional layer per branch (ReLU-Conv3x3-BN-Mul) and twice the number of blocks. Models using architecture A were tested once and models using architecture B were tested twice.
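For clarity, the skip-free branch of architecture A can be written as follows. This is a hypothetical PyTorch sketch of the description above, not the paper's released code; the fixed 0.5 blend in the forward pass is a placeholder that the stochastic shake combination (forward α, backward β) would replace during training.

```python
from torch import nn

def branch(channels):
    """One branch of architecture A: ReLU-Conv3x3-BN-ReLU-Conv3x3-BN."""
    return nn.Sequential(
        nn.ReLU(),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
    )

class SkipFreeBlock(nn.Module):
    """Architecture A block: no identity shortcut, two parallel branches."""
    def __init__(self, channels):
        super().__init__()
        self.b1, self.b2 = branch(channels), branch(channels)

    def forward(self, x):
        # Placeholder deterministic blend; swap in the shake operation for
        # S-E-I or S-S-I training.
        return 0.5 * self.b1(x) + 0.5 * self.b2(x)
```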
The results of architecture A clearly show that shake-shake regularization can work even without a skip connection. On that particular architecture and on a 26 2x32d model, S-S-I is too strong and the model underfits. The softer effect of S-E-I works better, but this could change if the capacity is increased (e.g. 64d or 96d).
The results of architecture B are actually the most surprising. The first point to notice is that the regularization no longer works. This, in itself, would indicate that the regularization happens thanks to the interaction between the 2 convolutions in each branch. The second point is that the train and test curves of the S-E-I and E-E-B models are absolutely identical. This would indicate that, for architecture B, the shake operation of the forward pass has no effect on the cost function. The third point is that even with a really different training curve, the test curve of the S-S-I model is nearly identical to the test curves of the E-E-B and S-E-I models (albeit with a smaller variance).
Table 5: Error rates (%) on CIFAR-10.
| Model | αi.j | Architecture A | Architecture B | Architecture C |
|---|---|---|---|---|
| 26 2x32d E-E-B | n/a | 4.84 | 5.17 | - |
| 26 2x32d S-E-I | rand(0,1) | 4.05 | 5.09 | - |
| 26 2x32d S-S-I | rand(0,1) | 4.59 | 5.20 | - |
| 14 2x32d E-E-B | n/a | - | - | 9.65 |
| 14 2x32d S-E-I v1 | rand(0.4,0.6) | - | - | 8.7 |
| 14 2x32d S-E-I v2 | rand(0.35,0.65) | - | - | 7.73 |
| 14 2x32d S-E-I v3 | rand(0.30,0.70) | - | - | diverges |
Figure 6: Training curves (dark) and test curves (light). Left: Architecture A. Center: Architecture B. Right: Architecture C.
Finally, it would be interesting to see whether this method works without Batch Normalization. While batchnorm is commonly used on computer vision datasets, it is not necessarily the case for other types of problems (e.g. NLP, etc.). Architecture C is the same as architecture A but without Batch Normalization (i.e. no skip, 2 branches with the following structure ReLU-Conv3x3-ReLU-Conv3x3-Mul). To allow the E-E-B model to converge, the depth was reduced from 26 to 14 and the initial learning rate was set to 0.05 after a warm start at 0.025 for 1 epoch. The absence of Batch Normalization makes the model a lot more sensitive, and applying the same methods as before makes the model diverge. To soften the effect, an S-E-I model was chosen and the interval covered by αi.j was reduced from [0,1] to [0.4,0.6]. Models using architecture C and different intervals were tested once on CIFAR-10. As shown in Table 5 and Figure 6, this method works quite well, but it is also really easy to make the model diverge (see model 14 2x32d S-E-I v3).
# 6 Conclusion
A series of experiments seem to indicate an ability to combat overfit by decorrelating the branches of multi-branch networks. This method leads to state of the art results on CIFAR datasets and could potentially improve the accuracy of architectures that do not use ResNets or Batch Normalization. While these results are encouraging, questions remain on the exact dynamics at play. Understanding these dynamics could help expand the application field to a wider variety of complex architectures.
# References
Masoud Abdi and Saeid Nahavandi. Multi-residual networks. arXiv preprint arXiv:1609.05672, 2016.
Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Comput., 1996.
Léon Bottou. Online algorithms and stochastic approximations. In Online Learning and Neural Networks. Cambridge University Press, 1998.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016b.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Guoliang Kang, Jun Li, and Dacheng Tao. Shakeout: A new regularized deep neural network training scheme. In AAAI Conference on Artificial Intelligence, 2016.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representation (ICLR '17), 2017.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Tech Report, 2009.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? In International Conference on Learning Representation (ICLR '16), 2016.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.
Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
Steven J. Nowlan and Geoffrey E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 1992.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In ICLR 2016 Workshop, 2016.
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
Li Yinan, Xu Ruoyi, and Liu Fang. Whiteout: Gaussian adaptive regularization noise in deep neural networks. arXiv preprint arXiv:1612.01490v2, 2016.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
# The Kinetics Human Action Video Dataset
# Will Kay [email protected]
# João Carreira [email protected]
# Karen Simonyan [email protected]
# Brian Zhang [email protected]
# Chloe Hillier [email protected]
# Sudheendra Vijayanarasimhan [email protected]
# Fabio Viola [email protected]
# Tim Green [email protected]
# Trevor Back [email protected]
# Paul Natsev [email protected]
# Mustafa Suleyman [email protected]
# Andrew Zisserman [email protected]
# Abstract
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
# 1. Introduction
The Kinetics dataset can be seen as the successor to the two human action video datasets that have emerged as the standard benchmarks for this area: HMDB-51 [15] and UCF-101 [20]. These datasets have served the community very well, but their usefulness is now expiring. This is because they are simply not large enough or have sufficient variation to train and test the current generation of human action classification models based on deep learning. Coincidentally, one of the motivations for introducing the HMDB dataset was that the then current generation of action datasets was too small. The increase then was from 10 to 51 classes, and we in turn increase this to 400 classes.
In this paper we introduce a new, large, video dataset for human action classification. We developed this dataset principally because there is a lack of such datasets for human action classification, and we believe that having one will facilitate research in this area — both because the dataset is large enough to train deep networks from scratch, and also because the dataset is challenging enough to act as a performance benchmark where the advantages of different architectures can be teased apart.
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 3 | Our aim is to provide a large scale high quality dataset, covering a diverse range of human actions, that can be used for human action classiï¬cation, rather than temporal local- ization. Since the use case is classiï¬cation, only short clips of around 10s containing the action are included, and there are no untrimmed videos. However, the clips also con- tain sound so the dataset can potentially be used for many
Table 1 compares the size of Kinetics to a number of re- cent human action datasets. In terms of variation, although the UCF-101 dataset contains 101 actions with 100+ clips for each action, all the clips are taken from only 2.5k dis- tinct videos. For example there are 7 clips from one video of the same person brushing their hair. This means that there is far less variation than if the action in each clip was per- formed by a different person (and different viewpoint, light- ing, etc). This problem is avoided in Kinetics as each clip is taken from a different video.
The clips are sourced from YouTube videos. Con- sequently, for the most part, they are not professionally videoed and edited material (as in TV and ï¬lm videos). There can be considerable camera motion/shake, illumina- tion variations, shadows, background clutter, etc. More im1 | 1705.06950#3 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
| Year | Actions | Clips | Total | Videos |
|---|---|---|---|---|
| 2011 | 51 | min 102 | 6,766 | 3,312 |
| 2012 | 101 | min 101 | 13,320 | 2,500 |
| 2015 | 200 | avg 141 | 28,108 | 19,994 |
| 2017 | 400 | min 400 | 306,245 | 306,245 |
Table 1: Statistics for recent human action recognition datasets. "Actions" specifies the number of action classes; "Clips", the number of clips per class; "Total" is the total number of clips; and "Videos", the total number of videos from which these clips are extracted.
More importantly, there are a great variety of performers (since each clip is from a different video) with differences in how the action is performed (e.g. its speed), clothing, body pose and shape, age, and camera framing and viewpoint.
Our hope is that the dataset will enable a new generation of neural network architectures to be developed for video. For example, architectures including multiple streams of information (RGB/appearance, optical flow, human pose, object category recognition), architectures using attention, etc. That will enable the virtues (or otherwise) of the new architectures to be demonstrated. Issues such as the tension between static and motion prediction, and the open question of the best method of temporal aggregation in video (recurrent vs convolutional) may finally be resolved.
Statistics: The dataset has 400 human action classes, with 400–1150 clips for each action, each from a unique video. Each clip lasts around 10s. The current version has 306,245 videos, and is divided into three splits, one for training having 250–1000 videos per class, one for validation with 50 videos per class and one for testing with 100 videos per class. The statistics are given in Table 2. The clips are from YouTube videos and have a variable resolution and frame rate.
The rest of the paper is organized as: Section 2 gives an overview of the new dataset; Section 3 describes how it was collected and discusses possible imbalances in the data and their consequences for classifier bias. Section 4 gives the performance of a number of ConvNet architectures that are trained and tested on the dataset. Our companion paper [5] explores the benefit of pre-training an action classification network on Kinetics, and then using the features from the network for action classification on other (smaller) datasets. The URLs of the YouTube videos and temporal intervals of the dataset can be obtained from http://deepmind.com/kinetics.
# 2. An Overview of the Kinetics Dataset
Content: The dataset is focused on human actions (rather than activities or events). The list of action classes covers: Person Actions (singular), e.g. drawing, drinking, laughing, pumping fist; Person-Person Actions, e.g. hugging, kissing, shaking hands; and, Person-Object Actions, e.g. opening present, mowing lawn, washing dishes. Some actions are fine grained and require temporal reasoning to distinguish, for example different types of swimming. Other actions require more emphasis on the object to distinguish, for example playing different types of wind instruments.
| Train | Validation | Test |
|---|---|---|
| 250–1000 | 50 | 100 |
Table 2: Kinetics Dataset Statistics. The number of clips for each class in the train/val/test partitions.
Non-exhaustive annotation. Each class contains clips illustrating that action. However, a particular clip can contain several actions. Interesting examples in the dataset include: "texting" while "driving a car"; "Hula hooping" while "playing ukulele"; "brushing teeth" while "dancing" (of some type). In each case both of the actions are Kinetics classes, but the clip will probably appear under only one of these classes, not both, i.e. clips do not have complete (exhaustive) annotation. For this reason, when evaluating classification performance, a top-5 measure is more suitable than top-1. This is similar to the situation in ImageNet [18], where one of the reasons for using a top-5 measure is that each image is labelled for only a single class, although it may contain multiple classes.
There is not a deep hierarchy, but instead there are several (non-exclusive) parent-child groupings, e.g. Music (playing drums, trombone, violin, . . . ); Personal Hygiene (brushing teeth, cutting nails, washing hands, . . . ); Dancing (ballet, macarena, tap, . . . ); Cooking (cutting, frying, peeling, . . . ). The full list of classes is given in the appendix, together with parent-child groupings. Figure 1 shows clips from a sample of classes.
(a) headbanging
(b) stretching leg
(c) shaking hands
(d) tickling
(e) robot dancing
(f) salsa dancing
(g) riding a bike
(h) riding unicycle
# (i) playing violin
# (j) playing trumpet
# (k) braiding hair
# (l) brushing hair
(m) dribbling basketball
(n) dunking basketball
Figure 1: Example classes from the Kinetics dataset. Best seen in colour and with zoom. Note that in some cases a single image is not enough for recognizing the action (e.g. "headbanging") or distinguishing classes ("dribbling basketball" vs "dunking basketball"). The dataset contains: Singular Person Actions (e.g. "robot dancing", "stretching leg"); Person-Person Actions (e.g. "shaking hands", "tickling"); Person-Object Actions (e.g. "riding a bike"); same verb different objects (e.g. "playing violin", "playing trumpet"); and same object different verbs (e.g. "dribbling basketball", "dunking basketball"). These are realistic (amateur) videos — there is often significant camera shake, for instance.
This section describes how the candidate clips were obtained, labelled and cleaned up to form the dataset. We then discuss possible biases in the dataset due to the collection process.
Overview: clips for each class were obtained by first searching on YouTube for candidates, and then using Amazon Mechanical Turkers (AMT) to decide if the clip contains the action or not. Three or more confirmations (out of five) were required before a clip was accepted. The dataset was de-duped, by checking that only one clip is taken from each video, and that clips do not contain common video material. Finally, classes were checked for overlap and de-noised.
We now describe these stages in more detail.
# 3.1. Stage 1: Obtaining an action list
Curating a large list of human actions is challenging, as there is no single listing available at this scale with suitable visual action classes. Consequently, we had to combine numerous sources together with our own observations of actions that surround us. These sources include: (i) Action datasets — existing datasets like ActivityNet [3], HMDB [15], UCF101 [20], MPII Human Pose [2], ACT [25] have useful classes and a suitable subset of these were used; (ii) Motion capture — there are a number of motion capture datasets which we looked through and extracted file titles. These titles described the motion within the file and were often quite creative; and, (iii) Crowd-sourced — we asked Mechanical Turk workers to come up with a more appropriate action if the label we had presented to them for a clip was incorrect.
# 3.2. Stage 2: Obtaining candidate clips
The chosen method and steps, which combine a number of different internal efforts, are detailed below:
Step 1: obtaining videos. Videos are drawn from the YouTube corpus by matching video titles with the Kinetics actions list.
Step 2: temporal positioning within a video. Image classifiers are available for a large number of human actions. These classifiers are obtained by tracking user actions on Google Image Search. For example, for a search query "climbing tree", user relevance feedback on images is collected by aggregating across the multiple times that that search query is issued. This relevance feedback is used to select a high-confidence set of images that can be used to train a "climbing tree" image classifier. These classifiers are run at the frame level over the videos found in step 1, and clips extracted around the top k responses (where k = 2).
It was found that the action list had a better match to relevant classifiers if action verbs are formatted to end with "ing". Thinking back to image search, this makes sense as typically if you are searching for an example of someone performing an action you would issue queries like "running man" or "brushing hair" over other tenses like "man ran" or "brush hair".
The output of this stage is a large number of videos and a position in all of them where one of the actions is potentially occurring. 10 second clips are created by taking 5 seconds either side of that position (there are length exceptions when the position is within 5 seconds of the start or end of the video, leading to a shorter clip). The clips are then passed onto the next stage of cleanup through human labelling.
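A sketch of this clip-construction step, under the stated 10-second window and boundary exceptions. Function names and the frame-score representation are assumptions made for illustration, not part of the released pipeline.

```python
def clip_bounds(peak_time, video_duration, half_window=5.0):
    """10 s clip centred on a classifier response, shortened when the
    peak lies within 5 s of the start or end of the video."""
    start = max(0.0, peak_time - half_window)
    end = min(video_duration, peak_time + half_window)
    return start, end

def candidate_clips(frame_scores, fps, video_duration, k=2):
    """Take the top-k scoring frames (k = 2 in the text) and build clips."""
    top = sorted(range(len(frame_scores)),
                 key=lambda i: frame_scores[i], reverse=True)[:k]
    return [clip_bounds(i / fps, video_duration) for i in top]
```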
# 3.3. Stage 3: Manual labelling process
The key aim of this stage was to identify whether the supposed action was actually occurring during a clip or not. A human was required in the loop for this phase and we chose to use Amazon's Mechanical Turk (AMT) for the task due to the large numbers of high quality workers using the platform.
A single-page webapp was built for the labelling task and optimised to maximise the number of clips presented to the workers whilst maintaining a high quality of annotation. The labelling interface is shown in Figure 2. The user interface design and theme were chosen to differentiate the task from many others on the platform as well as make the task as stimulating and engaging as possible. This certainly paid off as the task was one of the highest rated on the platform and would frequently get more than 400 distinct workers as soon as a new run was launched.
The workers were given clear instructions at the beginning. There were two screens of instruction, the second reinforcing the first. After acknowledging they understood the task they were presented with a media player and several response icons. The interface would fetch a set of videos from the available pool for the worker at that moment and embed the first clip. The task consisted of 20 videos, each with a different class where possible; we randomised all the videos and classes to make it more interesting for the workers and prevent them from becoming stuck on classes with low yields. Two of the video slots were used by us to inject groundtruth clips. This allowed us to get an estimate of the accuracy for each worker. If a worker fell below a 50% success rating on these, we showed them a "low accuracy" warning screen. This helped address many low accuracies. In the labelling interface, workers were asked the question "Can you see a human performing the action class-name?". The following response options were available on the interface as icons:
⢠Yes, this contains a true example of the action
⢠No, this does not contain an example of the action | 1705.06950#15 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
Figure 2: Labeling interface used in Mechanical Turk.
⢠You are unsure if there is an example of the action
⢠Replay the video
Following annotating, the video ids, clip times and labels were exported from the database and handed on to be used for model training.
⢠Video does not play, does not contain a human, is an image, cartoon or a computer game.
When a worker responded with âYesâ we also asked the question âDoes the action last for the whole clip?â in or- der to use this signal later during model training. | 1705.06950#16 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
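The ground-truth injection described above amounts to a simple running accuracy check per worker. A hedged sketch follows: the 50% threshold comes from the text, everything else (names, data representation) is illustrative.

```python
def worker_accuracy(groundtruth_results):
    """groundtruth_results: booleans, True when the worker answered an
    injected ground-truth clip correctly."""
    if not groundtruth_results:
        return None
    return sum(groundtruth_results) / len(groundtruth_results)

def needs_low_accuracy_warning(groundtruth_results, threshold=0.5):
    acc = worker_accuracy(groundtruth_results)
    return acc is not None and acc < threshold
```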
Note, the AMT workers didn't have access to the audio, to ensure that the video can be classified purely based on its visual content.
In order for a clip to be added to the dataset, it needed to receive at least 3 positive responses from workers. We allowed each clip to be annotated 5 times, except if it had already received more than 2 of a specific response. For example, if 3 out of 3 workers had said it did not contain an example of the action, we would immediately remove it from the pool and not continue until 5 workers had annotated it. Due to the large scale of the task it was necessary to quickly remove classes that were made up of low quality or completely irrelevant candidates. Failing to do this would have meant that we spent a lot of money paying workers to mark videos as negative or bad. Accuracies for each class were calculated after 20 clips from that class had been annotated. We adjusted the accuracy threshold between runs but would typically start at a high accuracy of 50% (1 in 2 videos were expected to contain the action).
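The acceptance rule sketched in this paragraph can be summarised as follows. Response labels and function names are illustrative; the thresholds (at least 3 positives, at most 5 annotations, early retirement once any single response exceeds 2 votes) come from the text.

```python
from collections import Counter

def clip_decision(responses, max_annotations=5, required_positive=3):
    """Decide whether a clip is accepted, rejected, or needs more annotations."""
    counts = Counter(responses)
    if counts["yes"] >= required_positive:
        return "accept"
    if max(counts.values(), default=0) > 2:
        # 3+ votes for some other response: retire the clip early.
        return "reject"
    if len(responses) < max_annotations:
        return "collect_more"
    return "reject"
```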
What we learnt: We found that more specific classes like "riding mule" produced much less noise than more general classes like "riding". However, occasionally using more general classes was a benefit, as they could subsequently be split into a few distinct classes that were not previously present, with the candidates re-sent out to workers, e.g. "gardening" was split into "watering plants", "trimming trees" and "planting trees".
The amount of worker traffic that the task generated meant that we could not rely on direct fetching and writes to the database, even with appropriate indexes and optimised queries. We therefore created many caches, each made up of groups of clips for a specific worker. When a worker started a new task, the interface would fetch a set of clips for that worker. The caches were replenished often by background processes as clips received a sufficient number of annotations. This also avoided labelling collisions, where previously more than one worker might pick up the same video to annotate and we would quickly exceed 5 responses for a single clip.
# 3.4. Stage 4: Cleaning up and de-noising
One of the dataset design goals was having a single clip from each given video sequence, different from existing datasets which slice videos containing repetitive actions into many (correlated) training examples. We also employed mechanisms for identifying structural problems as we grew the dataset, such as repeated classes due to synonymy or different word order (e.g. riding motorbike, riding motorcycle), and classes that are too general and co-occur with many others (e.g. talking), which are problematic for typical 1-of-K classification learning approaches (instead of multi-label classification). We will now describe these procedures.
De-duplicating videos. We de-duplicated videos using two complementary approaches. First, in order to have only one clip from each YouTube link, we randomly selected a single clip from amongst those validated by Turkers for that video. This stage filtered out around 20% of Turker-approved examples, but we visually found that it still left many duplicates. The reason is that YouTube users often create videos reusing portions of other videos, for example as part of video compilations or promotional adverts. Sometimes they are cropped, resized and generally pre-processed in different ways (but, nevertheless, the image classifier could localize the same clip). So even though each clip is from a distinct video there were still duplications.
We devised a process for de-duplicating across YouTube links which operated independently for each class. First we computed Inception-V1 [12] feature vectors (taken after the last average pooling layer) on 224 × 224 center crops of 25 uniformly sampled frames from each video, which we then averaged. Afterwards we built a class-wise matrix holding all cosine similarities between these feature vectors and thresholded it. Finally, we computed connected components and kept a random example from each. We found this to work well for most classes using the same threshold of 0.97, but adjusted it in a few cases where classes were visually similar, such as some taking place in the snow or in the water. This process reduced the number of Turker-approved examples by a further 15%.
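A minimal sketch of this de-duplication step, assuming the averaged Inception-V1 features for one class are already available as a matrix; array shapes and the random tie-breaking are illustrative.

```python
# Sketch of the per-class de-duplication: threshold pairwise cosine similarities,
# find connected components, and keep one random video per component.
import numpy as np
from scipy.sparse.csgraph import connected_components

def deduplicate(features, threshold=0.97, seed=0):
    rng = np.random.default_rng(seed)
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T            # class-wise cosine similarity matrix
    adjacency = similarity >= threshold       # thresholded similarities
    n_components, labels = connected_components(adjacency, directed=False)
    keep = [rng.choice(np.flatnonzero(labels == c)) for c in range(n_components)]
    return sorted(keep)                       # indices of retained videos

features = np.random.rand(100, 1024).astype(np.float32)  # averaged frame features
print(deduplicate(features))
```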
Detecting noisy classes. Classes can be "noisy" in that they may overlap with other classes or they may contain several quite distinct (in terms of the action) groupings due to an ambiguity in the class name. For example, "skipping" can be "skipping with a rope" and also "skipping stones across water". We trained two-stream action classifiers [19] repeatedly throughout the dataset development to identify these noisy classes. This allowed us to find the top confusions for each class, which sometimes were clear even by just verifying the class names (but went unnoticed due
to the scale of the dataset), and other times required eyeballing the data to understand whether the confusions were acceptable and the classes were just difficult to distinguish because of shortcomings of the model. We merged, split or outright removed classes based on these detected confusions.
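As an illustration of how such confusions can be surfaced automatically, the following sketch reads the largest off-diagonal entries from a hypothetical confusion matrix of the trained classifier (names and shapes are illustrative).

```python
# Sketch: extract the top class-confusion pairs from a row-normalized confusion matrix.
import numpy as np

def top_confusions(confusion, class_names, k=10):
    """confusion[i, j] = fraction of class i examples predicted as class j."""
    conf = confusion.copy()
    np.fill_diagonal(conf, 0.0)                      # ignore correct predictions
    flat = np.argsort(conf, axis=None)[::-1][:k]     # largest off-diagonal entries
    pairs = np.unravel_index(flat, conf.shape)
    return [(class_names[i], class_names[j], conf[i, j]) for i, j in zip(*pairs)]

rng = np.random.default_rng(0)
confusion = rng.random((5, 5)); confusion /= confusion.sum(axis=1, keepdims=True)
names = ["riding mule", "riding horse", "skipping", "jumping", "talking"]
for a, b, frac in top_confusions(confusion, names, k=3):
    print(f"{a} -> {b}: {frac:.0%}")
```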
Final filtering. After all the data was collected, de-duplicated and the classes were selected, we ran a final manual clip filtering stage. Here the class scores from the two-stream model were again useful, as they allowed sorting the examples from most confident to least confident, a measure of how prototypical they were. We found that noisy examples were often among the lowest ranked examples and focused on those. The ranking also made adjacent any remaining duplicate videos, which made it easier to filter those out too.
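A minimal sketch of this confidence-based review ordering, assuming softmax scores from the two-stream model are available for each clip of a class.

```python
# Sketch: review each class's clips starting from the least confident (least
# prototypical) examples, where noisy clips tend to concentrate.
import numpy as np

def review_order(scores, class_index):
    """scores: (num_clips, num_classes) softmax outputs for clips of one class."""
    class_scores = scores[:, class_index]
    return np.argsort(class_scores)          # least confident clips first

scores = np.random.rand(8, 400)
print(review_order(scores, class_index=42)[:5])
```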
# 3.5. Discussion: dataset bias I
We are familiar with the notion of dataset bias leading to a lack of generalization: a classifier trained on one dataset, e.g. Caltech 256 [10], does not perform well when tested on another, e.g. PASCAL VOC [8]. Indeed it is even possible to train a classifier to identify which dataset an image belongs to [22].
There is another sense of bias which could arise from unbalanced categories within a dataset. For example, gender imbalance in a training set could lead to a corresponding performance bias for classifiers trained on this set. There are precedents for this, e.g. in publicly available face detectors not being race agnostic (see footnote 1), and more recently in learning a semantic bias in written texts [4]. It is thus an important question as to whether Kinetics leads to such bias.
To this end we carried out a preliminary study on (i) whether the data for each action class of Kinetics is gender balanced, and (ii) if there is an imbalance, whether it leads to a biased performance of the action classifiers.
The outcome of (i) is that in 340 action classes out of the 400, the data is either not dominated by a single gender, or it is mostly not possible to determine the gender; the latter arises in classes where, for example, only hands appear, or the "actors" are too small or heavily clothed. The classes that do show gender imbalance include "shaving beard" and "dunking basketball", which are mostly male, and "filling eyebrows" and "cheerleading", which are mostly female.
The outcome of (ii) is that, for these classes, we found little evidence of classifier bias for action classes with gender imbalance. For example in "playing poker", which tends to have more male players, all videos with female players are correctly classified. The same happens for "Hammer throw". We can conjecture that this lack of bias is because the classifier is able to make use of both the objects involved in
an action as well as the motion patterns, rather than simply physical appearance.
1 https://www.media.mit.edu/posts/media-lab-student-recognized-for-fighting-bias-in-machine-learning/
Imbalance can also be examined on other "axes", for example age and race. Again, in a preliminary investigation we found very little clear bias. There is one exception where there is clear bias towards babies: in "crying", many of the videos of non-babies crying are misclassified; another example is "wrestling", where the opposite happens: adults wrestling in a ring seem to be better classified than children wrestling in their homes, but it is hard to tell whether the deciding factor is age or the scenes where the actions happen. Nevertheless, these issues of dataset imbalance and any resulting classifier bias warrant a more thorough investigation, and we return to this in section 5.
# 3.6. Discussion: dataset bias II
Another type of bias could arise because classifiers are involved in the dataset collection pipeline: it could be that these classifiers lead to a reduction in the visual variety of the clips obtained, which in turn leads to a bias in the action classifier trained on these clips. In more detail, although the videos are selected based on their title (which is provided by the person uploading the video to YouTube), the position of the candidate clip within the video is provided by an image (RGB) classifier, as described above. In practice, using a classifier at this point does not seem to constrain the variety of the clips: since the video is about the action, the particular frame chosen as part of the clip may not be crucial; and, in any case, the clip contains hundreds more frames where the appearance (RGB) and motion can vary considerably. For these reasons we are not so concerned about the intermediate use of image classifiers.
# 4. Benchmark Performance
In this section we first briefly describe three standard ConvNet architectures for human action recognition in video. We then use these architectures as baselines and compare their performance by training and testing on the Kinetics dataset. We also include their performance on UCF-101 and HMDB-51.
We consider three typical approaches for video classification: ConvNets with an LSTM on top [7, 26]; two-stream networks [9, 19]; and a 3D ConvNet [13, 21, 23]. There have been many improvements over these basic architectures, e.g. [9], but our intention here is not to perform a thorough study of what is the very best architecture on Kinetics, but instead to provide an indication of the level of difficulty of the dataset. A rough graphical overview of the three types of architectures we compare is shown in figure 3, and the specification of their temporal interfaces is given in table 3.
For the experiments on the Kinetics dataset all three architectures are trained from scratch using Kinetics. However, for the experiments on UCF-101 and HMDB-51 the architectures (apart from the 3D ConvNet) are pre-trained on ImageNet (since these datasets are too small to train the architectures from scratch).
# 4.1. ConvNet+LSTM
The high performance of image classification networks makes it appealing to try to reuse them with as little change as possible for video. This can be achieved by using them to extract features independently from each frame and then pooling their predictions across the whole video [14]. This is in the spirit of bag-of-words image modeling approaches [16, 17, 24], but while convenient in practice, it has the issue of entirely ignoring temporal structure (e.g. such models cannot distinguish opening a door from closing one).
In theory, a more satisfying approach is to add a recurrent layer to the model [7, 26], such as an LSTM, which can encode state, and capture temporal ordering and long-range dependencies. We position an LSTM layer with batch normalization (as proposed by Cooijmans et al. [6]) after the last average pooling layer of a ResNet-50 model [11], with 512 hidden units. We then add a fully connected layer on top of the output of the LSTM for the multi-way classification. At test time the classification is taken from the model output for the last frame.
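A minimal sketch of this architecture is given below. The paper's models were implemented in TensorFlow; this PyTorch-style sketch is illustrative only, assumes 2048-d ResNet-50 pooled features as input (backbone not shown), and omits the recurrent batch normalization of Cooijmans et al. [6].

```python
# Hypothetical sketch of the LSTM head: per-frame ResNet-50 features are fed to
# an LSTM with 512 hidden units; classification uses the last frame's output.
import torch
import torch.nn as nn

class ConvNetLSTMHead(nn.Module):
    def __init__(self, feature_dim=2048, hidden=512, num_classes=400):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, frame_features):           # (batch, time, feature_dim)
        outputs, _ = self.lstm(frame_features)
        return self.classifier(outputs[:, -1])   # prediction from the last frame

head = ConvNetLSTMHead()
logits = head(torch.randn(2, 25, 2048))          # 25 RGB frames per training clip
print(logits.shape)                              # torch.Size([2, 400])
```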
# 4.2. Two-Stream networks
LSTMs on features from the last layers of ConvNets can model high-level variation, but may not be able to capture fine low-level motion, which is critical in many cases. They are also expensive to train, as they require unrolling the network through multiple frames for backpropagation-through-time. A different, very practical approach, introduced by Simonyan and Zisserman [19], models short temporal snapshots of videos by averaging the predictions from a single RGB frame and a stack of 10 externally computed optical flow frames, after passing them through two replicas of an ImageNet-pretrained ConvNet. The flow stream has an adapted input convolutional layer with twice as many input channels as flow frames (because flow has two channels, horizontal and vertical), and at test time multiple snapshots are sampled from the video and the action prediction is averaged. This was shown to get very high performance on existing benchmarks, while being very efficient to train and test.
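The sketch below illustrates the two conventions described above, namely stacking 10 two-channel flow frames into a 20-channel input, and averaging per-snapshot predictions at test time; the equal weighting of the two streams is an assumption.

```python
# Sketch of the two-stream input/output conventions (illustrative only).
import numpy as np

def stack_flow(flow_frames):
    """flow_frames: (10, 2, H, W) horizontal/vertical flow -> (20, H, W) input."""
    return flow_frames.reshape(-1, *flow_frames.shape[2:])

def fuse_predictions(rgb_snapshot_probs, flow_snapshot_probs):
    """Each argument: (num_snapshots, num_classes) softmax outputs."""
    return 0.5 * rgb_snapshot_probs.mean(0) + 0.5 * flow_snapshot_probs.mean(0)

flow_input = stack_flow(np.zeros((10, 2, 224, 224), dtype=np.float32))
print(flow_input.shape)                        # (20, 224, 224)
fused = fuse_predictions(np.random.rand(25, 400), np.random.rand(25, 400))
print(fused.argmax())
```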
# 4.3. 3D ConvNets
3D ConvNets [13, 21, 23] seem like a natural approach to video modeling. They are just like standard 2D convolutional networks, but with spatio-temporal filters, and have a very interesting characteristic: they directly create hierarchical representations of spatio-temporal data. One issue with these models is that they have many more parameters
[Figure 3 diagram: three panels, a) LSTM, b) Two-Stream and c) 3D ConvNet, each mapping input images over time (plus stacked optical flow for the two-stream model) through ConvNets to an action prediction.]
Figure 3: Video architectures used as baseline human action classifiers.
than 2D ConvNets because of the additional kernel dimension, and this makes them harder to train. Also, they seem to preclude the benefits of ImageNet pre-training, and previous work has defined relatively shallow custom architectures and trained them from scratch [13, 14, 21, 23]. Results on benchmarks have shown promise but have not yet matched the state-of-the-art, possibly because they require more training data than their 2D counterparts. Thus 3D ConvNets are a good candidate for evaluation on our larger dataset.
# 4.4. Implementation details
The ConvNet+LSTM and Two-Stream architectures use ResNet-50 as the base architecture. In the case of the Two-Stream architecture, a separate ResNet-50 is trained independently for each stream. As noted earlier, for these architectures the ResNet-50 model is pre-trained on ImageNet for the experiments on UCF-101 and HMDB-51, and trained from scratch for experiments on Kinetics. The 3D-ConvNet is not pre-trained.
For this paper we implemented a small variation of C3D [23], which has 8 convolutional layers, 5 pooling layers and 2 fully connected layers at the top. The inputs to the model are short 16-frame clips with 112 × 112-pixel crops. Differently from the original paper we use batch normalization after all convolutional and fully connected layers. Another difference to the original model is in the first pooling layer, where we use a temporal stride of 2 instead of 1, which reduces the memory footprint and allows for bigger batches; this was important for batch normalization (especially after the fully connected layers, where there is no weight tying). Using this stride we were able to train with 15 videos per batch per GPU using standard K40 GPUs.
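An abbreviated sketch of such a 3D ConvNet stem is shown below; it is a PyTorch-style illustration, only the first two of the eight convolutional layers are shown, and the layer widths are assumptions rather than the paper's exact configuration.

```python
# Hypothetical sketch: 3x3x3 convolutions with batch normalization, and a first
# pooling layer with temporal stride 2 (unlike original C3D).
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

c3d_stem = nn.Sequential(
    conv_bn(3, 64),
    nn.MaxPool3d(kernel_size=2, stride=2),   # temporal stride 2 in the first pool
    conv_bn(64, 128),
    nn.MaxPool3d(kernel_size=2, stride=2),
)

clip = torch.randn(1, 3, 16, 112, 112)       # 16-frame, 112x112 input clip
print(c3d_stem(clip).shape)                  # torch.Size([1, 128, 4, 28, 28])
```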
We trained the models on videos using standard SGD with momentum in all cases, with synchronous parallelization across 64 GPUs for all models. We trained models on Kinetics for up to 100k steps, with a 10x reduction of learning rate when validation loss saturated, and tuned weight decay and learning rate hyperparameters on the validation set of Kinetics. All the models were implemented in TensorFlow [1].
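A minimal sketch of this optimization setup is shown below. It is PyTorch-style and illustrative: the paper reduced the learning rate when validation loss saturated, for which ReduceLROnPlateau is used here as a stand-in, and the placeholder model, loss and hyperparameter values are assumptions.

```python
# Sketch: SGD with momentum plus a 10x learning-rate reduction on plateau.
import torch

model = torch.nn.Linear(2048, 400)                   # placeholder for a video model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=1e-4)       # lr / weight decay tuned on val
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1,
                                                       patience=5)

for step in range(10):                               # up to ~100k steps in the paper
    loss = model(torch.randn(8, 2048)).logsumexp(dim=1).mean()   # dummy loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    if step % 2 == 0:                                # pretend this is a validation pass
        scheduler.step(loss.item())
```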
The original clips have variable resolution and frame rate. In our experiments they are all normalized so that the larger image side is 340 pixels wide for models using ResNet-50 and 128 pixels wide for the 3D ConvNet. We also resample the videos so they have 25 frames per second.
At test time, we split the video uniformly into crops of 16 frames and apply the classifier separately on each. We then average the class scores, as in the original paper.
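A minimal sketch of this test-time procedure, with an arbitrary stand-in scoring function.

```python
# Sketch: split the frame sequence into consecutive 16-frame crops, score each
# crop, and average the class scores over crops.
import numpy as np

def video_scores(frames, score_fn, crop_len=16):
    """frames: (T, H, W, 3); score_fn maps a (crop_len, H, W, 3) crop to class scores."""
    n_crops = len(frames) // crop_len
    crops = [frames[i * crop_len:(i + 1) * crop_len] for i in range(n_crops)]
    return np.mean([score_fn(c) for c in crops], axis=0)

dummy_score_fn = lambda crop: np.random.rand(400)
frames = np.zeros((240, 112, 112, 3), dtype=np.float32)   # ~10s at 25fps
print(video_scores(frames, dummy_score_fn).shape)          # (400,)
```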
Data augmentation is known to be of crucial importance for the performance of deep architectures. We used random cropping both spatially, randomly cropping a 299 × 299
Method | #Params | Training: # Input Frames | Training: Temporal Footprint | Testing: # Input Frames | Testing: Temporal Footprint
(a) ConvNet+LSTM | 29M | 25 rgb | 5s | 50 rgb | 10s
(b) Two-Stream | 48M | 1 rgb, 10 flow | 0.4s | 25 rgb, 250 flow | 10s
(c) 3D-ConvNet | 79M | 16 rgb | 0.64s | 240 rgb | 9.6s
Table 3: Number of parameters and temporal input sizes of the models. ConvNet+LSTM and Two-Stream use ResNet-50 ConvNet modules.
Architecture | UCF-101 RGB | UCF-101 Flow | UCF-101 RGB+Flow | HMDB-51 RGB | HMDB-51 Flow | HMDB-51 RGB+Flow | Kinetics RGB (top-1/top-5) | Kinetics Flow (top-1/top-5)
(a) ConvNet+LSTM | 84.3 | – | – | 43.9 | – | – | 57.0 / 79.0 | –
(b) Two-Stream | 84.2 | 85.9 | 92.5 | 51.0 | 56.9 | 63.7 | 56.0 / 77.3 | 49.5 / 71.9
(c) 3D-ConvNet | 51.6 | – | – | 24.3 | – | – | 56.1 / 79.5 | –
Table 4: Baseline comparisons across datasets: (left) training and testing on split 1 of UCF-101; (middle) training and testing on split 1 of HMDB-51; (right) training and testing on Kinetics (showing top-1/top-5 performance). ConvNet+LSTM and Two-Stream use ResNet-50 ConvNet modules, pretrained on ImageNet for UCF-101 and HMDB-51 examples but not for the Kinetics experiments. Note that the Two-Stream architecture numbers on individual RGB and Flow streams can be interpreted as a simple baseline which applies a ConvNet independently on 25 uniformly sampled frames then averages the predictions.
patch (respectively 112 × 112 for the 3D ConvNet), and temporally, when picking the starting frame among those early enough to guarantee a desired number of frames. For shorter videos, we looped the video as many times as necessary to satisfy each model's input interface. We also applied random left-right flipping consistently for each video during training.
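A minimal sketch of this augmentation pipeline, assuming clips are held as (frames, height, width, channels) arrays; the crop sizes follow the 3D ConvNet settings and the flip probability is an assumption.

```python
# Sketch: random spatial crop, random 16-frame temporal crop (looping short
# videos), and a left-right flip applied consistently to the whole clip.
import numpy as np

def augment(frames, crop_size=112, clip_len=16, rng=np.random.default_rng()):
    t, h, w, _ = frames.shape
    if t < clip_len:                                   # loop short videos
        frames = np.concatenate([frames] * int(np.ceil(clip_len / t)))[:clip_len]
        t = clip_len
    t0 = rng.integers(0, t - clip_len + 1)             # temporal crop
    y0 = rng.integers(0, h - crop_size + 1)            # spatial crop
    x0 = rng.integers(0, w - crop_size + 1)
    clip = frames[t0:t0 + clip_len, y0:y0 + crop_size, x0:x0 + crop_size]
    if rng.random() < 0.5:                             # consistent left-right flip
        clip = clip[:, :, ::-1]
    return clip

video = np.zeros((40, 128, 170, 3), dtype=np.float32)
print(augment(video).shape)                            # (16, 112, 112, 3)
```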
At test time, we sample from up to 10 seconds of video, again looping if necessary. Better performance could be obtained by also considering left-right flipped videos at test time and by adding additional augmentation, such as photometric, during training. We leave this to future work.
# 4.5. Baseline evaluations
unlike the other baselines. This translates into poor performance on all datasets, but especially on UCF-101 and HMDB-51; on Kinetics it is much closer to the performance of the other models, thanks to the much larger training set of Kinetics.
• Class difficulty. We include a full list of Kinetics classes sorted by classification accuracy under the two-stream model in figure 4. Eating classes are among the hardest, as they sometimes require distinguishing what is being eaten, such as hotdogs, chips and doughnuts, and these may appear small and already partially consumed in the video. Dancing classes are also hard, as well as classes centered on a specific body part, such as "massaging feet" or "shaking head".
In this section we compare the performance of the three baseline architectures whilst varying the dataset used for training and testing.
Table 4 shows the classification accuracy when training and testing on either UCF-101, HMDB-51 or Kinetics. We train and test on split 1 of UCF-101 and HMDB-51, and on the train/val set and held-out test set of Kinetics.
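For reference, the top-1/top-5 numbers reported for Kinetics can be computed as follows (a minimal sketch with illustrative shapes).

```python
# Sketch: top-k classification accuracy from per-clip class scores.
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """scores: (N, num_classes); labels: (N,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

scores = np.random.rand(1000, 400)
labels = np.random.randint(0, 400, size=1000)
print(topk_accuracy(scores, labels, k=1), topk_accuracy(scores, labels, k=5))
```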
There are several noteworthy observations. First, the performance is far lower on Kinetics than on UCF-101, an indication of the different levels of difficulty of the two datasets. On the other hand, the performance on HMDB-51 is worse than on Kinetics; it seems to have a truly difficult test set, and it was designed to be difficult for appearance-centered methods, while having little training data. The parameter-rich 3D-ConvNet model is not pre-trained on ImageNet,
⢠Class confusion. The top 10 class confusions are provided in table 5. They mostly correspond to ï¬ne- grained distinctions that one would expect to be hard, for example âlong jumpâ and âtriple jumpâ, confusing burger with doughnuts. The confusion between âswing dancingâ and âsalsa dancingâ raises the question of how accurate motion modeling is in the two-stream model, since âswing dancingâ is typically much faster-paced and has a peculiar style that makes it easy for humans to distinguish from salsa.
⢠Classes where motion matters most. We tried to an- alyze for which classes motion is more important and | 1705.06950#39 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 40 | ⢠Classes where motion matters most. We tried to an- alyze for which classes motion is more important and
riding mechanical bull presenting weather forecast sled dog racing playing squash / racquetball snowkiting diving cliff shearing sheep pull ups filling eyebrows: bench pressing riding or walking with horse passing American football (in game) picking fruit weaving basket) playing tennis crawling baby cutting watermelon tying tie trapezing bowling recording music tossing coin fixing hair yawning shooting basketball answering questions rock scissors paper drinking beer shaking hands making a cake throwing ball drinking shots eating chips drinking headbutting sneezing sniffing eating doughnuts faceplanting slapping 00 0.2 04 06 08 Accuracy
Figure 4: List of 20 easiest and 20 hardest Kinetics classes sorted by class accuracies obtained using the two-stream model.
which ones were recognized correctly using just ap- pearance information, by comparing the recognition accuracy ratios when using the ï¬ow and RGB streams of the two-stream model in isolation. We show the ï¬ve classes where this ratio is largest and smallest in ta- ble 6.
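A minimal sketch of the flow-to-RGB ratio computation referenced above; the per-class accuracy values in the example are made up for illustration.

```python
# Sketch: rank classes by the ratio of per-class accuracy under the flow stream
# to per-class accuracy under the RGB stream, each evaluated in isolation.
import numpy as np

def flow_rgb_ratio(flow_acc, rgb_acc, class_names, eps=1e-6):
    ratio = np.asarray(flow_acc) / (np.asarray(rgb_acc) + eps)
    order = np.argsort(ratio)
    return [(class_names[i], float(ratio[i])) for i in order]

names = ["rock scissors paper", "making a cake", "sword fighting"]
flow_acc, rgb_acc = [0.53, 0.05, 0.62], [0.10, 0.50, 0.20]   # illustrative values
ranking = flow_rgb_ratio(flow_acc, rgb_acc, names)
print(ranking[0], ranking[-1])   # smallest and largest flow/RGB ratios
```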
# 5. Conclusion
We have described the Kinetics Human Action Video dataset, which has an order of magnitude more videos than previous datasets of its type. We have also discussed the procedures we employed for collecting the data and for ensuring its quality. We have shown that the performance of standard existing models on this dataset is much lower than on UCF-101 and on par with HMDB-51, whilst the dataset allows large models such as 3D ConvNets to be trained from scratch, unlike the existing human action datasets.
We have also carried out a preliminary analysis of dataset imbalance and whether this leads to bias in the classifiers trained on the dataset. We found little evidence that the resulting classifiers demonstrate bias along sensitive axes, such as across gender. This is however a complex area that deserves further attention. We leave a thorough analysis for future work, in collaboration with specialists from complementary areas, namely social scientists and critical humanists.
We will release trained baseline models (in TensorFlow), so that they can be used, for example, to generate features for new action classes.
# Acknowledgements:
The collection of this dataset was funded by DeepMind. We are very grateful for help from Andreas Kirsch, John-Paul Holt, Danielle Breen, Jonathan Fildes, James Besley and Brian Carver. We are grateful for advice and comments from Tom Duerig, Juan Carlos Niebles, Simon Osindero, Chuck Rosenberg and Sean Legassick; we would also like to thank Sandra and Aditya for data clean up.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 8
[2] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014. 4
[3] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. 2, 4
[4] A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017. 6
[5] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 2
[6] T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016. 7
Class 1 | Class 2 | Confusion
"riding mule" | "riding or walking with horse" | 40%
"hockey stop" | "ice skating" | 36%
"swing dancing" | "salsa dancing" | 36%
"strumming guitar" | "playing guitar" | 35%
"shooting basketball" | "playing basketball" | 32%
"cooking sausages" | "cooking chicken" | 29%
"sweeping floor" | "mopping floor" | 27%
"triple jump" | "long jump" | 26%
"doing aerobics" | "zumba" | 26%
"petting animal (not cat)" | "feeding goats" | 25%
"shaving legs" | "waxing legs" | 25%
"snowboarding" | "skiing (not slalom or crosscountry)" | 22%
Table 5: Top-12 class confusions in Kinetics, using the two-stream model.
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
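The confusion pairs reported in Table 5 can, in principle, be recovered from per-clip predictions of any classifier. The snippet below is a minimal sketch and not the paper's evaluation code: `y_true` and `y_pred` are hypothetical lists of ground-truth and predicted class names, and the percentage is read here as the fraction of clips of the first class that are predicted as the second.

```python
from collections import Counter, defaultdict

def top_confusions(y_true, y_pred, k=12):
    """Rank ordered class pairs (true -> predicted) by confusion rate."""
    totals = Counter(y_true)              # clips per ground-truth class
    confused = defaultdict(int)           # (true, predicted) -> misclassified clip count
    for t, p in zip(y_true, y_pred):
        if t != p:
            confused[(t, p)] += 1
    rates = {pair: n / totals[pair[0]] for pair, n in confused.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Toy example with invented labels, just to show the interface.
y_true = ["riding mule"] * 5 + ["hockey stop"] * 5
y_pred = ["riding or walking with horse", "riding mule", "riding or walking with horse",
          "riding mule", "riding mule",
          "ice skating", "ice skating", "hockey stop", "hockey stop", "hockey stop"]
for (a, b), rate in top_confusions(y_true, y_pred, k=2):
    print(f"{a} -> {b}: {rate:.0%}")
```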
1705.06950 | 45 | Table 5: Top-12 class confusions in Kinetics, using the two-stream model.
Class                      Flow/RGB accuracy ratio
'rock scissors paper'      5.3
'sword fighting'           3.1
'robot dancing'            3.1
'air drumming'             2.8
'exercising arm'           2.5
'making a cake'            0.1
'cooking sausages'         0.1
'sniffing'                 0.1
'eating cake'              0.0
'making a sandwich'        0.0
Table 6: Classes with largest and smallest ratios of recognition accuracy when using flow and RGB. The highest ratios correspond to when flow does better, the smallest to when RGB does better. We also evaluated the ratios of rgb+flow to rgb accuracies and the ordering was quite similar.
Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016. 7
[7] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015. 7 | 1705.06950#45 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
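The flow/RGB ratio in Table 6 is simply the per-class accuracy of the flow stream divided by the per-class accuracy of the RGB stream. Below is a minimal sketch of that arithmetic with invented accuracy numbers; the function and variable names are not from the paper.

```python
def flow_rgb_ratios(acc_flow, acc_rgb):
    """Per-class ratio of flow-stream accuracy to RGB-stream accuracy.

    acc_flow / acc_rgb: dicts mapping class name -> accuracy in [0, 1].
    Ratios well above 1 suggest motion carries the signal; ratios near 0
    suggest appearance does.
    """
    ratios = {}
    for cls, rgb in acc_rgb.items():
        flow = acc_flow.get(cls, 0.0)
        ratios[cls] = flow / rgb if rgb > 0 else float("inf")
    return dict(sorted(ratios.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical per-class accuracies, chosen to mimic the extremes of Table 6.
acc_flow = {"rock scissors paper": 0.53, "making a cake": 0.05}
acc_rgb = {"rock scissors paper": 0.10, "making a cake": 0.50}
print(flow_rgb_ratios(acc_flow, acc_rgb))  # roughly {'rock scissors paper': 5.3, 'making a cake': 0.1}
```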
1705.06950 | 46 | [8] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015. 6
[9] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In IEEE International Conference on Computer Vision and Pattern Recognition CVPR, 2016. 7
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016. 7
[12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 6
[13] S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1):221–231, 2013. 7, 8 | 1705.06950#46 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 47 | [14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014. 7, 8
[15] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011. 1, 2, 4
[16] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008. 7
[17] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. International journal of computer vision, 79(3):299–318, 2008. 7 | 1705.06950#47 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 48 | [18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and F. Li. Imagenet large scale visual recognition challenge. IJCV, 2015. 1, 2
[19] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014. 6, 7
[20] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 1, 2, 4
[10] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007. 6
[21] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In European conference on computer vision, pages 140–153. Springer, 2010. 7, 8 | 1705.06950#48 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 49 | [22] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1521–1528. IEEE, 2011. 6
[23] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489–4497. IEEE, 2015. 7, 8
[24] H. Wang and C. Schmid. Action recognition with improved trajectories. In International Conference on Computer Vision, 2013. 7
[25] X. Wang, A. Farhadi, and A. Gupta. Actions ~ transformations. In CVPR, 2016. 4
[26] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4694–4702, 2015. 7
# A. List of Kinetics Human Action Classes | 1705.06950#49 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 50 | # A. List of Kinetics Human Action Classes
This is the list of classes included in the human action video dataset. The number of clips for each action class is given by the number in brackets following each class name.
1. abseiling (1146)
2. air drumming (1132)
3. answering questions (478)
4. applauding (411)
5. applying cream (478)
6. archery (1147)
7. arm wrestling (1123)
8. arranging flowers (583)
9. assembling computer (542)
10. auctioning (478)
11. baby waking up (611)
12. baking cookies (927)
13. balloon blowing (826)
14. bandaging (569)
15. barbequing (1070)
16. bartending (601)
17. beatboxing (943)
18. bee keeping (430)
19. belly dancing (1115)
20. bench pressing (1106)
21. bending back (635)
22. bending metal (410)
23. biking through snow (1052)
24. blasting sand (713)
25. blowing glass (1145)
26. blowing leaves (405)
27. blowing nose (597)
28. blowing out candles (1150)
29. bobsledding (605)
30. bookbinding (914)
31. bouncing on trampoline (690) | 1705.06950#50 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
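The per-class clip counts given in brackets in the list above also make it easy to quantify the imbalance that the summary says was analysed for classifier bias. Below is a minimal sketch over a hand-copied subset of the counts; extend the dictionary to all 400 classes for the real statistics.

```python
import statistics

# A few (class, clip count) pairs copied from the list above.
clip_counts = {
    "abseiling": 1146,
    "air drumming": 1132,
    "answering questions": 478,
    "applauding": 411,
    "applying cream": 478,
    "archery": 1147,
}

counts = list(clip_counts.values())
print("classes:", len(counts))
print("min / median / max clips:", min(counts), statistics.median(counts), max(counts))
print("imbalance (max/min): %.1f" % (max(counts) / min(counts)))
```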
1705.06950 | 51 | 28. blowing out candles (1150)
29. bobsledding (605)
30. bookbinding (914)
31. bouncing on trampoline (690)
32. bowling (1079)
33. braiding hair (780)
34. breading or breadcrumbing (454)
35. breakdancing (948)
36. brush painting (532)
37. brushing hair (934)
38. brushing teeth (1149)
39. building cabinet (431)
40. building shed (427)
41. bungee jumping (1056)
42. busking (851)
43. canoeing or kayaking (1146)
44. capoeira (1092)
45. carrying baby (558)
46. cartwheeling (616)
47. carving pumpkin (711)
48. catching fish (671)
49. catching or throwing baseball (756)
50. catching or throwing frisbee (1060)
51. catching or throwing softball (842)
52. celebrating (751)
53. changing oil (714)
54. changing wheel (459)
55. checking tires (555)
56. cheerleading (1145)
57. chopping wood (916)
58. clapping (491)
59. clay pottery making (513)
60. clean and jerk (902)
61. cleaning floor (874)
62. cleaning gutters (598) | 1705.06950#51 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 52 | 59. clay pottery making (513)
60. clean and jerk (902)
61. cleaning floor (874)
62. cleaning gutters (598)
63. cleaning pool (447)
64. cleaning shoes (706)
65. cleaning toilet (576)
66. cleaning windows (695)
67. climbing a rope (413)
68. climbing ladder (662)
69. climbing tree (1120)
70. contact juggling (1135)
71. cooking chicken (1000)
72. cooking egg (618)
73. cooking on campfire (403)
74. cooking sausages (467)
75. counting money (674)
76. country line dancing (1015)
77. cracking neck (449)
78. crawling baby (1150)
79. crossing river (951)
80. crying (1037)
81. curling hair (855)
82. cutting nails (560)
83. cutting pineapple (712)
84. cutting watermelon (767)
85. dancing ballet (1144)
86. dancing charleston (721)
87. dancing gangnam style (836)
88. dancing macarena (958)
89. deadlifting (805)
90. decorating the christmas tree (612)
91. digging (404)
92. dining (671)
93. disc golfing (565)
94. diving cliff (1075) | 1705.06950#52 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 53 | 91. digging (404)
92. dining (671)
93. disc golfing (565)
94. diving cliff (1075)
95. dodgeball (595)
96. doing aerobics (461)
97. doing laundry (461)
98. doing nails (949)
99. drawing (445)
100. dribbling basketball (923)
101. drinking (599)
102. drinking beer (575)
103. drinking shots (403)
104. driving car (1118)
105. driving tractor (922)
106. drop kicking (716)
107. drumming fingers (409)
108. dunking basketball (1105)
109. dying hair (1072)
110. eating burger (864)
111. eating cake (494)
112. eating carrots (516)
113. eating chips (749)
114. eating doughnuts (528)
115. eating hotdog (570)
116. eating ice cream (927)
117. eating spaghetti (1145)
118. eating watermelon (550)
119. egg hunting (500)
120. exercising arm (416)
121. exercising with an exercise ball (438)
122. extinguishing fire (602)
123. faceplanting (441)
124. feeding birds (1150)
125. feeding fish (973)
126. feeding goats (1027)
127. ï¬lling eyebrows (1085) | 1705.06950#53 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 54 | 124. feeding birds (1150)
125. feeding fish (973)
126. feeding goats (1027)
127. filling eyebrows (1085)
128. finger snapping (825)
129. fixing hair (676)
130. flipping pancake (720)
131. flying kite (1063)
132. folding clothes (695)
133. folding napkins (874)
134. folding paper (940)
135. front raises (962)
136. frying vegetables (608)
137. garbage collecting (441)
138. gargling (430)
139. getting a haircut (658)
140. getting a tattoo (737)
141. giving or receiving award (953)
142. golf chipping (699)
143. golf driving (836)
144. golf putting (1081)
145. grinding meat (415)
146. grooming dog (613)
147. grooming horse (645)
148. gymnastics tumbling (1143)
149. hammer throw (1148)
150. headbanging (1090)
151. headbutting (640)
152. high jump (954)
153. high kick (825)
154. hitting baseball (1071)
155. hockey stop (468)
156. holding snake (430)
157. hopscotch (726)
158. hoverboarding (564) | 1705.06950#54 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 55 | 154. hitting baseball (1071)
155. hockey stop (468)
156. holding snake (430)
157. hopscotch (726)
158. hoverboarding (564)
159. hugging (517)
160. hula hooping (1129)
161. hurdling (622)
162. hurling (sport) (836)
163. ice climbing (845)
164. ice fishing (555)
165. ice skating (1140)
166. ironing (535)
167. javelin throw (912)
168. jetskiing (1140)
169. jogging (417)
170. juggling balls (923)
171. juggling fire (668)
172. juggling soccer ball (484)
173. jumping into pool (1133)
174. jumpstyle dancing (662)
175. kicking field goal (833)
176. kicking soccer ball (544)
177. kissing (733)
178. kitesurfing (794)
179. knitting (691)
180. krumping (657)
181. laughing (926)
182. laying bricks (432)
183. long jump (831)
184. lunge (759)
185. making a cake (463)
186. making a sandwich (440)
187. making bed (679)
188. making jewelry (658)
189. making pizza (1147) | 1705.06950#55 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 57 | 198. milking cow (980) 199. mopping ï¬oor (606) 200. motorcycling (1142) 201. moving furniture (426) 202. mowing lawn (1147) 203. news anchoring (420) 204. opening bottle (732) 205. opening present (866) 206. paragliding (800) 207. parasailing (762) 208. parkour (504) 209. passing American football (in game) (863) 210. passing American football (not in game) (1045) 211. peeling apples (592) 212. peeling potatoes (457) 213. petting animal (not cat) (757) 214. petting cat (756) 215. picking fruit (793) 216. planting trees (557) 217. plastering (428) 218. playing accordion (925) 219. playing badminton (944) 220. playing bagpipes (838) 221. playing basketball (1144) 222. playing bass guitar (1135) 223. playing cards (737) 224. playing cello (1081) 225. playing chess (850) 226. playing clarinet (1022) 227. playing controller (524) 228. playing cricket (949) 229. playing cymbals (636) 230. playing | 1705.06950#57 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 58 | 226. playing clarinet (1022) 227. playing controller (524) 228. playing cricket (949) 229. playing cymbals (636) 230. playing didgeridoo (787) 238. playing kickball (468) 239. playing monopoly (731) 240. playing organ (672) 241. playing paintball (1140) 242. playing piano (691) 243. playing poker (1134) 244. playing recorder (1148) 245. playing saxophone (916) 246. playing squash or racquetball (980) 247. playing tennis (1144) 248. playing trombone (1149) 249. playing trumpet (989) 250. playing ukulele (1146) 251. playing violin (1142) 252. playing volleyball (804) 253. playing xylophone (746) 254. pole vault (984) 255. presenting weather forecast (1050) 256. pull ups (1121) 257. pumping ï¬st (1009) 258. pumping gas (544) 259. punching bag (1150) 260. punching person (boxing) (483) 261. push up (614) 262. pushing car (1069) 263. pushing cart (1150) 264. pushing wheelchair (465) 265. reading | 1705.06950#58 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 61 | 274. riding or walking with horse (1131)
275. riding scooter (674)
276. riding unicycle (864)
277. ripping paper (605)
278. robot dancing (893) 279. rock climbing (1144) 280. rock scissors paper (424) 281. roller skating (960) 282. running on treadmill (428) 283. sailing (867) 284. salsa dancing (1148) 285. sanding ï¬oor (574) 286. scrambling eggs (816) 287. scuba diving (968) 288. setting table (478) 289. shaking hands (640) 290. shaking head (885) 291. sharpening knives (424) 292. sharpening pencil (752) 293. shaving head (971) 294. shaving legs (509) 295. shearing sheep (988) 296. shining shoes (615) 297. shooting basketball (595) 298. shooting goal (soccer) (444) 299. shot put (987) 300. shoveling snow (879) 301. shredding paper (403) 302. shufï¬ing cards (828) 303. side kick (991) 304. sign language interpreting (446) 305. singing (1147) 306. situp (817) 307. skateboarding (1139) 308. ski jumping (1051) | 1705.06950#61 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |
1705.06950 | 62 | 309. skiing (not slalom or crosscountry) (1140)
310. skiing crosscountry (477)
311. skiing slalom (539)
312. skipping rope (488)
313. skydiving (505)
314. slacklining (790)
315. slapping (465)
316. sled dog racing (775)
317. smoking (1105)
318. smoking hookah (857)
319. snatch weight lifting (943)
320. sneezing (505)
321. sniffing (399)
322. snorkeling (1012)
323. snowboarding (937)
324. snowkiting (1145)
325. snowmobiling (601)
326. somersaulting (993)
327. spinning poi (1134)
328. spray painting (908)
329. spraying (470)
330. springboard diving (406)
331. squat (1148)
332. sticking tongue out (770)
333. stomping grapes (444)
334. stretching arm (718)
335. stretching leg (829)
336. strumming guitar (472)
337. surfing crowd (876)
338. surfing water (751)
339. sweeping floor (604)
340. swimming backstroke (1077)
341. swimming breast stroke (833)
342. swimming butterï¬y stroke (678) | 1705.06950#62 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 | [
{
"id": "1603.04467"
},
{
"id": "1502.03167"
},
{
"id": "1603.09025"
}
] |