doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1710.11573 | 48 | # 5 CONCLUSION
In this work, we presented a novel mixed convex-combinatorial optimization framework for learning deep neural networks with hard-threshold units. Combinatorial optimization is used to set discrete targets for the hard-threshold hidden units, such that each unit only has a linearly-separable problem to solve. The network then decomposes into individual perceptrons, which can be learned with standard convex approaches, given these targets. Based on this, we developed a recursive algorithm for learning deep hard-threshold networks, which we call feasible target propagation (FTPROP), and an efficient mini-batch variant (FTPROP-MB). We showed that the commonly-used but poorly-justified saturating straight-through estimator (STE) is the special case of FTPROP-MB that results from using a saturated hinge loss at each layer and our target heuristic, and that other types of STE correspond to other heuristic and loss combinations in FTPROP-MB. Finally, we defined the soft hinge loss and showed that FTPROP-MB with a soft hinge loss at each layer improves classification accuracy for multiple models on CIFAR-10 and ImageNet when compared to the saturating STE. | 1710.11573#48 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
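The correspondence stated above, in which the saturating straight-through estimator amounts to using a saturated hinge loss at each layer of FTPROP-MB, can be illustrated with a minimal PyTorch sketch. The class and variable names below are illustrative, not taken from the paper's released code: the forward pass applies a hard threshold, and the backward pass propagates gradients only where the pre-activation lies in [-1, 1].

```python
import torch

class SaturatingSTESign(torch.autograd.Function):
    """Hard-threshold (sign) unit with a saturating straight-through backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Forward: output +1 or -1 depending on the sign of the pre-activation.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Backward: pass the gradient only where |x| <= 1 (the non-saturated region
        # of a saturated hinge loss); elsewhere the gradient is zeroed.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: apply the hard-threshold unit in place of a smooth activation.
x = torch.randn(8, requires_grad=True)
y = SaturatingSTESign.apply(x)
y.sum().backward()
```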
1710.11469 | 49 | Figure 5: Eyeglass detection for CelebA dataset with small sample size. The goal is to predict whether a person wears glasses or not. Random samples from training and test data are shown. Groups of observations in the training data that have common (Y, ID) here correspond to pictures of the same person with either glasses on or off. These are labelled by red boxes in the training data and the conditional variance penalty is calculated across these groups of pictures.
of the original image. We show that this approach generalizes better than simply pooling the augmented data, in the sense that we need fewer augmented samples to achieve the same test error. This setting is shown in §5.5.
Details of the network architectures can be found in Appendix §C. All reported error rates are averaged over five runs of the respective method. A TensorFlow (Abadi et al., 2015) implementation of CoRe can be found at https://github.com/christinaheinze/core.
# 5.1 Eyeglasses detection with small sample size
In this example, we explore a setting where training and test data are drawn from the same distribution, so we might not expect a distributional shift between the two. However, we consider a small training sample size, which gives rise to statistical fluctuations between training and test data. We assess to what extent the conditional variance penalty can help to improve test accuracies in this setting. | 1710.11469#49 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
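To make the grouping described above concrete, here is a minimal PyTorch-style sketch of the conditional-variance-of-prediction penalty (the paper's reference implementation linked above is in TensorFlow; the function names, the averaging over groups, and the way the penalty is added to the loss are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def core_penalty(logit, y, ids):
    """Average within-group variance of the predicted logits, where a group is a set
    of observations sharing the same (Y, ID) = (y, id)."""
    keys = torch.stack([y.long(), ids.long()], dim=1)
    _, group = torch.unique(keys, dim=0, return_inverse=True)
    penalty, n_groups = logit.new_zeros(()), 0
    for g in group.unique():
        members = logit[group == g]
        if members.numel() > 1:          # singleton groups contribute no variance
            penalty = penalty + members.var()
            n_groups += 1
    return penalty / max(n_groups, 1)

def loss_fn(logit, y, ids, core_weight=1.0):
    # logit: (n,) raw score for the positive class; y: (n,) labels in {0, 1};
    # ids: (n,) person identifier; core_weight: weight of the CoRe penalty.
    return F.binary_cross_entropy_with_logits(logit, y.float()) + core_weight * core_penalty(logit, y, ids)
```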
1710.11573 | 49 | In future work, we plan to develop novel target heuristics and layer loss functions by investigating connections between our framework and constraint satisfaction and satisfiability. We also intend to further explore the benefits of deep networks with hard-threshold units. In particular, while recent research clearly shows their ability to reduce computation and energy requirements, they should also be less susceptible to vanishing and exploding gradients and may be less susceptible to covariate shift and adversarial examples.
# ACKNOWLEDGMENTS
This research was partly funded by ONR grant N00014-16-1-2697. The GPU machine used for this research was donated by NVIDIA.
# REFERENCES
Yoshua Bengio. How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation. arXiv preprint arXiv:1407.7906 [cs.LG], 2014.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. arXiv preprint arXiv:1308.3432 [cs.LG], 2013.
Miguel Á. Carreira-Perpiñán and Weiran Wang. Distributed optimization of deeply nested systems. | 1710.11573#49 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
Specifically, we use a subsample of the CelebA dataset (Liu et al., 2015) and try to classify images according to whether or not the person in the image wears glasses. For construction of the ID variable, we exploit the fact that several photos of the same person are available and set ID to be the identifier of the person in the dataset. Figure 5 shows examples from both the training and the test data set. The conditional variance penalty is estimated across groups of observations that share a common (Y, ID). Here, this corresponds to pictures of the same person where all pictures show the person either with glasses (if Y = 1) or all pictures show the person without glasses (Y = 0). Statistical fluctuations between training and test set could, for instance, arise if by chance the background of eyeglass wearers is darker in the training sample than in test samples, the eyeglass wearers happen to be outdoors more often, or might be more often female than male, etc.
Below, we present the following analyses. First, we look at five different datasets and analyze the effect of adding the CoRe penalty (using conditional-variance-of-prediction)
| 1710.11469#50 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 50 | Miguel Á. Carreira-Perpiñán and Weiran Wang. Distributed optimization of deeply nested systems. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2014.
Abram L. Friesen and Pedro Domingos. Recursive Decomposition for Nonconvex Optimization. In Qiang Yang and Michael Woolridge (eds.), Proceedings of the 24th International Joint Conference on Artificial Intelligence, pp. 253–259. AAAI Press, 2015.
Abram L. Friesen and Pedro Domingos. The Sum-Product Theorem: A Foundation for Learning Tractable Models. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385 [cs.CV], 2015a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015b. | 1710.11573#50 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 51 | to the cross-entropy loss. Second, we focus on one dataset and compare the four different variants of the CoRe penalty in Eqs. (10) and (11) with ν ∈ {1/2, 1}.
# 5.1.1 CoRe penalty using the conditional variance of the predicted logits
We consider five different training sets which are created as follows. For each person in the standard CelebA training data we count the number of available images and select the 50 identities for which most images are available individually. We partition these 50 identities into 5 disjoint subsets of size 10 and consider the resulting 5 datasets, containing the images of 10 unique identities each. The resulting 5 datasets have sizes {289, 296, 292, 287, 287}. For the validation and the test set, we consider the usual CelebA validation and test split but balance these with respect to the target variable "Eyeglasses". The balanced validation set consists of 2766 observations; the balanced test set contains 2578 images. The identities in the validation and test sets are disjoint from the identities in the training sets. | 1710.11469#51 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 51 | Geoffrey E. Hinton. Coursera Lectures: Neural networks for machine learning, 2012.
Itay Hubara, Daniel Soudry, and Ran El-Yaniv. Binarized Neural Networks. In Advances in Neural Information Processing Systems, pp. 1–17, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 448–456, Lille, France, 2015.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In Proceedings of the 5th International Conference on Learning Representations, 2016.
Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. | 1710.11573#51 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 52 | Given a training dataset, the standard approach would be to pool all examples. The only additional information we exploit is that some observations can be grouped. If using a 5-layer convolutional neural network with a standard ridge penalty (details can be found in Table C.1) and pooling all data, the test error on unseen images ranges from 18.08% to 25.97%. Exploiting the group structure with the CoRe penalty (in addition to a ridge penalty) results in test errors ranging from 14.79% to 21.49%, see Table 1. The relative improvements when using the CoRe penalty range from 9% to 28.6%.
The test error is not very sensitive to the weight of the CoRe penalty as shown in Figure 6(a): for a large range of penalty weights, adding the CoRe penalty decreases the test error compared to the pooled estimator (identical to a CoRe penalty weight of 0). This holds true for various ridge penalty weights.
While the test error rates shown in Figure 6 already suggest that the CoRe penalty differentiates itself clearly from a standard ridge penalty, we next examine the differential effect of the CoRe penalty on the between- and within-group variances. Concretely, the variance of the predictions can be decomposed as | 1710.11469#52 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 52 | Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Yann LeCun. Learning Process in an Asymmetric Threshold Network. In E. Bienenstock, F. Fogelman Soulié, and G. Weisbuch (eds.), Disordered Systems and Biological Organization, pp. 233–240. Springer, Berlin, Heidelberg, 1986.
Yann LeCun. Modèles connexionnistes de l'apprentissage (connectionist learning models). PhD thesis, Université P. et M. Curie (Paris 6), 1987.
Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient BackProp. In Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller (eds.), Neural Networks: Tricks of the Trade: Second Edition, pp. 9–48. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. | 1710.11573#52 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 53 | $\mathrm{Var}(f_\theta(X)) = E[\mathrm{Var}(f_\theta(X) \mid Y, \mathrm{ID})] + \mathrm{Var}[E(f_\theta(X) \mid Y, \mathrm{ID})]$, where the first term on the rhs is the within-group variance that CoRe penalizes, while a ridge penalty would penalize both the within- and also the between-group variance (the second term on the rhs above). In Figure 6(b) we show the ratio between the CoRe penalty and the between-group variance, where groups are defined by conditioning on (Y, ID). Specifically, the ratio is computed as
$E[\mathrm{Var}(f_\theta(X) \mid Y, \mathrm{ID})] \, / \, \mathrm{Var}[E(f_\theta(X) \mid Y, \mathrm{ID})]$. (14)
The results shown in Figure 6(b) are computed on dataset 1 (DS 1). While increasing ridge penalty weights do lead to a smaller value of the CoRe penalty, the between-group variance is also reduced, such that the ratio between the two terms does not decrease with larger weights of the ridge penalty. With increasing weight of the CoRe penalty, the variance ratio decreases, showing that the CoRe penalty indeed penalizes the within-group variance more than the between-group variance.
8. In Figure D.1 in the Appendix, the numerator and the denominator are plotted separately as a function of the CoRe penalty weight.
| 1710.11469#53 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
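A small sketch of the diagnostic in (14), computing the within-group variance (the quantity the CoRe penalty targets) and the between-group variance of the predictions and returning their ratio; the grouping by (Y, ID) mirrors the decomposition above, while the function name and the use of empirical (biased) variances are illustrative choices:

```python
import torch

def variance_ratio(preds, y, ids):
    """Return E[Var(f(X) | Y, ID)], Var(E[f(X) | Y, ID]) and their ratio as in (14)."""
    keys = torch.stack([y.long(), ids.long()], dim=1)
    _, group = torch.unique(keys, dim=0, return_inverse=True)
    within, means = [], []
    for g in group.unique():
        members = preds[group == g]
        means.append(members.mean())
        within.append(members.var(unbiased=False) if members.numel() > 1 else preds.new_zeros(()))
    within_var = torch.stack(within).mean()                 # E[Var(f(X) | Y, ID)]
    between_var = torch.stack(means).var(unbiased=False)    # Var(E[f(X) | Y, ID])
    return within_var, between_var, within_var / between_var
```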
1710.11573 | 53 | Dong Hyun Lee, Saizheng Zhang, Asja Fischer, and Yoshua Bengio. Difference target propagation.
In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, volume 9284, pp. 498–515, 2015.
Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training Quantized Nets: A Deeper Understanding. In Advances in Neural Information Processing Systems, 2017.
Darryl D. Lin and Sachin S. Talathi. Fixed Point Quantization of Deep Convolutional Networks. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2849–2858, 2016.
Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks. In Workshop on On-Device Intelligence at ICML, 2016.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaev, Ganesh Venkatesh, and Hao Wu. Mixed Precision Training. arXiv preprint arXiv:1710.03740 [cs.AI], 2017. | 1710.11573#53 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 54 | Training/test error and training/test CoRe penalty value, by dataset and method:
DS 1, 5-layer CNN: error 0.0% (0.00%) / 18.08% (0.24%); penalty 19.14 (1.70) / 18.86 (1.87)
DS 1, 5-layer CNN + CoRe: error 0.0% (0.00%) / 15.08% (0.43%); penalty 0.01 (0.01) / 0.70 (0.05)
DS 2, 5-layer CNN: error 0.0% (0.00%) / 23.81% (0.51%); penalty 6.20 (0.35) / 6.97 (0.46)
DS 2, 5-layer CNN + CoRe: error 0.0% (0.00%) / 17.00% (0.75%); penalty 0.00 (0.00) / 0.41 (0.04)
DS 3, 5-layer CNN: error 0.0% (0.00%) / 18.61% (0.52%); penalty 7.33 (1.40) / 7.91 (1.13)
DS 3, 5-layer CNN + CoRe: error 0.0% (0.00%) / 14.79% (0.89%); penalty 0.00 (0.00) / 0.26 (0.03)
DS 4, 5-layer CNN: error 0.0% (0.00%) / 25.97% (0.24%)
DS 4, 5-layer CNN + CoRe: error 0.0% (0.00%) | 1710.11469#54 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 54 | Marvin L. Minsky and Seymour Papert. Perceptrons: an introduction to computational geometry. The MIT Press, Cambridge, MA, 1969.
A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, pp. 615–622. Polytechnic Institute of Brooklyn, 1962.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. In Proceedings of the 14th European Conference on Computer Vision, 2016.
Frank Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, pp. 318–362. The MIT Press, 1986. | 1710.11573#54 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11573 | 55 | Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of Gradient-Based Deep Learning. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Daniel Soudry, Itay Hubara, and Ron Meir. Expectation Backpropagation: parameter-free training of multilayer neural networks with real and discrete weights. In Advances in Neural Information Processing Systems. MIT Press Cambridge, 2014.
Wei Tang, Gang Hua, and Liang Wang. How to Train a Compact Binary Neural Network with High Accuracy? In Proceedings of the 31st Conference on Artificial Intelligence, pp. 2625–2631, 2017.
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training Neural Networks Without Gradients: A Scalable ADMM Approach. In Proceedings of the 33rd International Conference on Machine Learning, 2016. | 1710.11573#55 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 56 | Table 1: Eyeglass detection, trained on small subsets (DS1–DS5) of the CelebA dataset with disjoint identities. We report training and test error as well as the value of the CoRe penalty $\hat{C}_{f,1,\theta}$ on the training and the test set after training, evaluated for both the pooled estimator and the CoRe estimator. The weights of the ridge and the CoRe penalty were chosen based on their performance on the validation set.
Figure 6: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. (a) Average test error as a function of the CoRe penalty weight (x-axis), for various levels of the ridge penalty. The results can be seen to be fairly insensitive to the ridge penalty. (b) The variance ratio (14) on test data as a function of both the CoRe and ridge penalty weights. The CoRe penalty can be seen to penalize the within-group variance selectively, whereas a strong ridge penalty decreases both the within- and between-group variance. | 1710.11469#56 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 56 | Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
Rodney Winter and Bernard Widrow. MADALINE RULE II: A training algorithm for neural networks. In Proceedings of the IEEE International Conference on Neural Networks, San Diego, CA, USA, 1988. IEEE.
Yichao Wu and Yufeng Liu. Robust Truncated Hinge Loss Support Vector Machines. Journal of the American Statistical Association, 102(479):974–983, 2007.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. arXiv preprint arXiv:1606.06160 [cs.NE], 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained Ternary Quantization. In Proceedings of the 5th International Conference on Learning Representations, 2017.
# A EXPERIMENT DETAILS | 1710.11573#56 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 57 | Table 1 also reports the value of the CoRe penalty after training when evaluated for the pooled and the CoRe estimator on the training and the test set. As a qualitative measure to assess the presence of sample bias in the data (provided the model assumptions hold), we can compare the value the CoRe penalty takes after training when evaluated for the pooled estimator and the CoRe estimator. The difference yields a measure for the extent to which the respective estimators are functions of ∆. If the respective hold-out values are both small, this would indicate that the style features are not very predictive for the target variable. If, on the other hand, the CoRe penalty evaluated for the pooled estimator takes a much larger value than for the CoRe estimator (as in this case), this would indicate the presence of sample bias.
# 5.1.2 Other CoRe penalty types | 1710.11469#57 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 57 | # A EXPERIMENT DETAILS
All experiments were performed using PyTorch (http://pytorch.org/). CIFAR-10 experiments with the 4-layer convolutional network were performed on an NVIDIA Titan X. All other experiments were performed on NVIDIA Tesla P100 devices in a DGX-1. Code for the experiments is available at https://github.com/afriesen/ftprop.
# A.1 CIFAR-10
On CIFAR-10, which has 50K training images and 10K test images divided into 10 classes, we trained both a simple 4-layer convolutional network and a deeper 8-layer convolutional network used in (Zhou et al., 2016) with the above methods and then compared their top-1 accuracies on the test set. We pre-processed the images with mean / std normalization, and augmented the dataset with random horizontal flips and random crops from images padded with 4 pixels. Hyperparameters were chosen based on a small amount of exploration on a validation set. | 1710.11573#57 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
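A minimal torchvision sketch of the CIFAR-10 preprocessing and augmentation described above (random crops from 4-pixel-padded images, random horizontal flips, mean/std normalization); the specific channel statistics are assumed values, since the text only says that mean/std normalization was used:

```python
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# Assumed CIFAR-10 channel means/stds; the text does not give the exact values.
MEAN, STD = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # random crops from images padded with 4 pixels
    T.RandomHorizontalFlip(),      # random horizontal flips
    T.ToTensor(),
    T.Normalize(MEAN, STD),        # mean / std normalization
])
test_transform = T.Compose([T.ToTensor(), T.Normalize(MEAN, STD)])

train_set = CIFAR10("./data", train=True, download=True, transform=train_transform)
test_set = CIFAR10("./data", train=False, download=True, transform=test_transform)
```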
1710.11469 | 58 | # 5.1.2 Other CoRe penalty types
We now compare all CoRe penalty types, i.e., penalizing with (i) the conditional variance of the predicted logits $\hat{C}_{f,1,\theta}$, (ii) the conditional standard deviation of the predicted logits $\hat{C}_{f,1/2,\theta}$, (iii) the conditional variance of the loss $\hat{C}_{l,1,\theta}$, and (iv) the conditional standard deviation of the loss $\hat{C}_{l,1/2,\theta}$. For this comparison, we use the training dataset 1 (DS 1) from above. Table 2 contains the test error (training error was 0% for all methods) as
| 1710.11469#58 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
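As a rough illustration of the penalties compared in the chunk above, here is a minimal PyTorch sketch of the conditional-variance penalty on the predicted logits (the $\hat{C}_{f,1,\theta}$-style penalty): the variance of the logits is computed within each group of observations sharing (Y, ID) and averaged over groups. How groups are weighted and how the penalty enters the objective follows the paper's estimator and may differ in detail; `lambda_core` is a hypothetical name for the penalty weight.

```python
import torch

def core_conditional_variance(logits: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
    """Average within-group variance of the predicted logits, where each group
    collects observations sharing the same (Y, ID). Singleton groups contribute nothing."""
    penalty = logits.new_zeros(())
    n_groups = 0
    for g in group_ids.unique():
        members = logits[group_ids == g]
        if members.shape[0] > 1:
            penalty = penalty + members.var(dim=0, unbiased=True).sum()
            n_groups += 1
    return penalty / max(n_groups, 1)

# Usage sketch: add the penalty to the usual classification loss.
logits = torch.randn(8, 2, requires_grad=True)     # predicted logits
labels = torch.randint(0, 2, (8,))                 # class labels Y
groups = torch.tensor([0, 0, 1, 1, 2, 3, 3, 3])    # (Y, ID) group index per observation
lambda_core = 1.0                                  # penalty weight (hypothetical value)
loss = torch.nn.functional.cross_entropy(logits, labels) \
       + lambda_core * core_conditional_variance(logits, groups)
loss.backward()
```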
1710.11573 | 58 | The first network we tested on CIFAR-10 was a simple 4-layer convolutional network (convnet) structured as: conv(32) → conv(64) → fc(1024) → fc(10), where conv(c) and fc(c) indicate a convolutional layer and fully-connected layer, respectively, with c channels. Both convolutional layers used 5 × 5 kernels. Max-pooling with stride 2 was used after each convolutional layer, and a non-linearity was placed before each of the above layers except the first. Adam (Kingma & Ba, 2015) with learning rate 2.5e-4 and weight decay 5e-4 was used to minimize the cross-entropy loss for 300 epochs. The learning rate was decayed by a factor of 0.1 after 200 and 250 epochs. | 1710.11573#58 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
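A minimal PyTorch rendering of the 4-layer convnet described above (conv(32) → conv(64) → fc(1024) → fc(10) with 5 × 5 kernels, max-pooling of stride 2 after each convolution, and the non-linearity placed before every layer except the first). The padding of 2 and the use of ReLU as a stand-in activation are assumptions; in the paper the activation is sign, qReLU, or (saturated) ReLU.

```python
import torch
from torch import nn

class ConvNet4(nn.Module):
    def __init__(self, act=nn.ReLU):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)   # conv(32)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)  # conv(64)
        self.fc1 = nn.Linear(64 * 8 * 8, 1024)                    # fc(1024)
        self.fc2 = nn.Linear(1024, 10)                            # fc(10)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.act = act()  # placed before every layer except the first

    def forward(self, x):                       # x: (batch, 3, 32, 32)
        x = self.pool(self.conv1(x))            # -> (batch, 32, 16, 16)
        x = self.pool(self.conv2(self.act(x)))  # -> (batch, 64, 8, 8)
        x = torch.flatten(x, start_dim=1)
        x = self.fc1(self.act(x))
        return self.fc2(self.act(x))

model = ConvNet4()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```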
1710.11573 | 59 | In order to evaluate the performance of FTPROP-MB with the soft hinge loss on a deeper network, we adapted the 8-layer convnet from Zhou et al. (2016) to CIFAR-10. This network has 7 convolutional layers and one fully-connected layer for the output and uses batch normalization (Ioffe & Szegedy, 2015) before each non-linearity. We optimized the cross-entropy loss with Adam using a learning rate of 1e-3 and a weight decay of 1e-7 for the sign activation and 5e-4 for the qReLU and baseline activations. We trained for 300 epochs, decaying the learning rate by 0.1 after 200 and 250 epochs.
A.2 LEARNING CURVES FOR CIFAR-10
[Figure 4 plot: top-1 test accuracy vs. epoch for the 4-layer convnet on CIFAR-10; one curve each for Sign (FTP-SH), Sign (SSTE), qReLU (FTP-SH), qReLU (SSTE), ReLU, and Saturated ReLU; axes: Epoch vs. Top-1 Accuracy.] | 1710.11573#59 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
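The optimization setup described above for the 8-layer CIFAR-10 convnet (Adam with learning rate 1e-3, a per-activation weight decay, 300 epochs, and the learning rate decayed by 0.1 after epochs 200 and 250) corresponds to a standard optimizer-plus-scheduler loop. The sketch below uses a dummy stand-in model and random data purely to show the wiring; the weight-decay value shown is the one quoted for the qReLU and baseline activations.

```python
import torch
from torch import nn, optim

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for the convnet
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[200, 250], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(300):
    # one dummy mini-batch per epoch, just to illustrate the loop structure
    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()
    scheduler.step()  # decays the learning rate by 0.1 after epochs 200 and 250
```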
1710.11469 | 60 |
Table 2: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. We report training and test error as well as the value of the CoRe penalties $\hat{C}_{f,1,\theta}$, $\hat{C}_{f,1/2,\theta}$, $\hat{C}_{l,1,\theta}$ and $\hat{C}_{l,1/2,\theta}$ on the training and the test set after training, evaluated for both the pooled estimator and the CoRe estimator. The weights of the ridge and the CoRe penalty were chosen based on their performance on the validation set. The four CoRe penalty variants' performance differences are not statistically significant.
well as the value the respective CoRe penalty took after training on the training set and the test set. The four CoRe penalty variants' performance differences are not statistically significant. Hence, we mostly focus on the conditional variance of the predicted logits $\hat{C}_{f,1,\theta}$ in the other experiments.
# 5.1.3 Discussion | 1710.11469#60 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 60 | Figure 4: The top-1 test accuracies for the 4-layer convolutional network with different activation functions on CIFAR-10. The inset figures show the test accuracy for the final 100 epochs in detail. The left figure shows the network with sign activations. The right figure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
[Figure 5 plot: top-1 test accuracy vs. epoch for the 8-layer convnet on CIFAR-10; one curve each for Sign (FTP-SH), Sign (SSTE), qReLU (FTP-SH), qReLU (SSTE), ReLU, and Saturated ReLU; axes: Epoch vs. Top-1 Accuracy.] | 1710.11573#60 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 61 | # 5.1.3 Discussion
While the distributional shift in this example arises due to statistical fluctuations which will diminish as the sample size grows, the following examples are more concerned with biases that will persist even if the number of training and test samples is very large. A second difference to the subsequent examples is the grouping structure: in this example, we consider only a few identities, namely m = 10, with a relatively large number n_i of associated observations (about thirty observations per individual). In the following examples, m is much larger while n_i is typically smaller than five.
# 5.2 Gender classification with unknown confounding
In the following set of experiments, we work again with the CelebA dataset and the 5-layer convolutional neural network architecture described in Table C.1. This time we consider the problem of classifying whether the person shown in the image is male or female. We create a confounding in training and test set 1 by including mostly images of men wearing glasses and women not wearing glasses. In test set 2 the association between gender and glasses is flipped: women always wear glasses while men never wear glasses. Examples from the training and test sets 1 and 2 are shown in Figure 7. The training set, test set 1 and 2 are subsampled such that they are balanced with respect to Y, resulting in 16982, 4224 and 1120 observations, respectively.
| 1710.11469#61 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 61 |
Figure 5: The top-1 test accuracies for the 8-layer convolutional network with different activation functions on CIFAR-10. The inset figures show the test accuracy for the final 100 epochs in detail. The left figure shows the network with sign activations. The right figure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
IMAGENET (ILSVRC 2012)
On ImageNet, a much more challenging dataset with roughly 1.2M training images and 50K validation images divided into 1000 classes, we trained AlexNet, the most commonly used model in the quantization literature, with different activations and compared top-1 and top-5 accuracies of the trained models on the validation set. As is standard practice, we treat the validation set as the test data. Images were resized to 256 × 256, mean / std normalized, and then randomly cropped to 224 × 224 and randomly horizontally flipped. Models are tested on centered 224 × 224 crops of the test images. Hyperparameters were set based on Zhou et al. (2016) and Zhu et al. (2017), which both used SSTE to train AlexNet on ImageNet. | 1710.11573#61 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
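The ImageNet pipeline described above (resize to 256 × 256, mean / std normalization, random 224 × 224 crops and horizontal flips for training, centered 224 × 224 crops at test time) can be sketched with torchvision transforms. The normalization constants below are the usual ImageNet statistics and are an assumption, since the exact values are not given in the text.

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                 std=(0.229, 0.224, 0.225))

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

test_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])
```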
1710.11573 | 62 | We trained the Zhou et al. (2016) variant of AlexNet (Krizhevsky et al., 2012) on ImageNet with sign, 2-bit qReLU, ReLU, and saturated ReLU activations. This version of AlexNet removes the dropout and replaces the local contrast normalization layers with batch normalization. Our implementation does not split the convolutions into two separate blocks. We used the Adam optimizer with learning rate 1e-4 on the cross-entropy loss for 80 epochs, decaying the learning rate by 0.1 after 56 and 64 epochs. For the sign activation, we used a weight decay of 5e-6 as in Zhou et al. (2016). For the ReLU and saturated ReLU activations, which are much more likely to overfit, we used a weight decay of 5e-4, as used in Krizhevsky et al. (2012). For the 2-bit qReLU activation, we used a weight decay of 5e-5, since it is more expressive than sign but less so than ReLU. | 1710.11573#62 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
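The quantized ReLU (qReLU) activation mentioned above can be sketched as a clipped ReLU that is rounded onto a small number of levels, with a saturating straight-through backward pass. The parameterization below (clip to [0, 1], then round onto k + 1 evenly spaced levels, gradient passed only through the non-saturated region) is an assumption consistent with the k-step description later in the appendix; the paper's exact quantizer may differ.

```python
import torch

class QReLU(torch.autograd.Function):
    """k-step quantized ReLU with a saturating straight-through gradient."""

    @staticmethod
    def forward(ctx, x, k=3):
        ctx.save_for_backward(x)
        return torch.round(x.clamp(0.0, 1.0) * k) / k  # k + 1 levels in [0, 1]

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Saturating straight-through estimator: identity gradient inside the
        # clipping range, zero where the activation saturates.
        mask = (x >= 0.0) & (x <= 1.0)
        return grad_output * mask.to(grad_output.dtype), None

def qrelu(x, k=3):
    return QReLU.apply(x, k)

x = torch.randn(4, requires_grad=True)
qrelu(x).sum().backward()
print(x.grad)
```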
1710.11469 | 63 |
Figure 7: Classification for Y ∈ {woman, man}. There is an unknown confounding here as men are very likely to wear glasses in training and test set 1 data, while it is women that are likely to wear glasses in test set 2. Estimators that pool all observations are making use of this confounding and hence fail for test set 2. The conditional variance penalty for the CoRe estimator is computed over groups of images of the same person (and consequently same class label), such as the images in the red box on the left. The number of grouped examples c is 500. We vary the proportion of males in the grouped examples between 50% and 100% (cf. §5.2.1).
To compute the conditional variance penalty, we again use images of the same person. The ID variable is, in other words, the identity of the person, and gender Y is constant across all examples with the same ID. Conditioning on (Y, ID) is hence identical to conditioning on ID alone. Another difference to the other experiments is that we consider a binary style feature here.
# 5.2.1 Label shift in grouped observations | 1710.11469#63 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 63 | As with AlexNet, we trained ResNet-18 (He et al., 2015b) on ImageNet with sign, qReLU, ReLU, and saturated ReLU activations; however, for ResNet-18 we used a qReLU with k = 5 steps (i.e., 6 quantization levels, requiring 3 bits). We used the ResNet code provided by PyTorch. We optimized the cross-entropy loss with SGD with learning rate 0.1 and momentum 0.9 for 90 epochs, decaying the learning rate by a factor of 0.1 after 30 and 60 epochs. For the sign activation, we used a weight decay of 5e-7. For the ReLU and saturated ReLU activations, we used a weight decay of 1e-4. For the qReLU activation, we used a weight decay of 1e-5.
A.4 LEARNING CURVES FOR IMAGENET
[Figure 6 plot: top-1 train and test accuracy vs. epoch for AlexNet on ImageNet; one curve each for Sign (FTP-SH), Sign (SSTE), qReLU (FTP-SH), qReLU (SSTE), ReLU, and Saturated ReLU; axes: Epoch vs. Top-1 Accuracy.] | 1710.11573#63 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
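The ResNet-18 setup described above uses the standard torchvision model with its ReLUs swapped for the chosen hard-threshold or quantized activation, trained with SGD (learning rate 0.1, momentum 0.9) and the learning rate decayed by 0.1 after epochs 30 and 60. The recursive module swap below is one way to illustrate that replacement, with nn.Hardtanh standing in for the quantized activation; it is not the paper's own wiring.

```python
import torch
from torch import nn
from torchvision import models

def replace_relu(module: nn.Module, make_act) -> None:
    """Recursively replace every nn.ReLU in `module` with make_act()."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, make_act())
        else:
            replace_relu(child, make_act)

model = models.resnet18(num_classes=1000)
replace_relu(model, lambda: nn.Hardtanh(0.0, 1.0))  # stand-in for sign / qReLU

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=1e-5)  # 1e-5 is the qReLU setting quoted above
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)
```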
1710.11469 | 64 | # 5.2.1 Label shift in grouped observations
We compare six different datasets that vary with respect to the distribution of Y in the grouped observations. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. In the first dataset, 50% of the grouped observations correspond to males and 50% correspond to females. In the remaining 5 datasets, we increase the number of grouped observations with Y = "man", denoted by κ, to 75%, 90%, 95%, 99% and 100%, respectively. Table 3 shows the performance obtained for these datasets when using the pooled estimator compared to the CoRe estimator with $\hat{C}_{f,1,\theta}$. The results show that both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. The CoRe estimator improves the error rate of the pooled estimator by ≈ 28-39% on a relative scale. Figure 8 shows the performance for κ = 50% as a function of the CoRe penalty weight. Significant improvements can be obtained across a large range of values for the CoRe penalty and the ridge penalty. Test errors become more sensitive to the chosen value of the CoRe penalty for very large values of the ridge penalty weight as the overall amount of regularization is already large. | 1710.11469#64 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 64 | Figure 6: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset figures show the test accuracy for the final 25 epochs in detail. The left figure shows the network with sign activations. The right figure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
[Figure 7 plot: top-1 train and test accuracy vs. epoch for ResNet-18 on ImageNet; one curve each for Sign (FTP-SH), Sign (SSTE), qReLU (FTP-SH), qReLU (SSTE), ReLU, and Saturated ReLU; axes: Epoch vs. Top-1 Accuracy.] | 1710.11573#64 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11573 | 65 |
Figure 7: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for ResNet-18 with different activation functions on ImageNet. The inset figures show the test accuracy for the final 60 epochs in detail. The left figure shows the network with sign activations. The right figure shows the network with 3-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
| 1710.11573#65 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 67 | Error Penalty value Method Train Test 1 Test 2 5 5-layer CNN = κ . 5-layer CNN + CoRe 0.00% 2.00% 38.54% 22.77 6.43% 5.85% 24.07% 0.01 74.05 1.61 30.67 0.93 5 7 . = κ 5-layer CNN 5-layer CNN + CoRe 0.00% 1.98% 43.41% 8.23 7.61% 6.99% 27.05% 0.00 32.98 1.44 11.76 0.62 9 5-layer CNN = κ . 5-layer CNN + CoRe 0.00% 2.00% 47.64% 9.47 8.76% 7.74% 30.63% 0.00 40.51 1.26 14.37 0.42 5 9 . = κ 5-layer CNN 5-layer CNN + CoRe 0.00% 1.89% 48.96% 13.62 10.45% 9.35% 29.57% 0.00 61.01 0.42 21.26 0.16 9 9 . = κ 5-layer CNN 5-layer CNN + CoRe 0.00% 1.70% 50.11% 20.66 11.10% 10.51% 32.91% 0.00 70.80 0.00 27.80 0.00 1 | 1710.11469#67 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 69 | # Train Test: Females Test: Males
Table 3: Classification for Y ∈ {woman, man}. We compare six different datasets that vary with respect to the distribution of Y in the grouped observations. Specifically, we vary the proportion of images showing men between κ = 0.5 and κ = 1. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. Both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. The CoRe estimator improves the error rate of the pooled estimator by ≈ 28-39% on a relative scale. Table D.2 in the Appendix additionally contains the standard error of all shown results.
Method                 Error: Train   Error: Test 1   Error: Test 2
Inception V3                  5.74%           5.53%          30.29%
Inception V3 + CoRe           6.15%           5.85%          21.70%
Table 4: Classification for Y ∈ {woman, man} with κ = 0.5. Here, we compared ℓ2-regularized logistic regression based on Inception V3 features with and without the CoRe penalty. The CoRe estimator improves the performance of the pooled estimator by ≈ 28% on a relative scale.
# 5.2.2 Using pre-trained Inception V3 features | 1710.11469#69 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 70 | # 5.2.2 Using pre-trained Inception V3 features
To verify that the above conclusions do not change when using more powerful features, we here compare ℓ2-regularized logistic regression using pre-trained Inception V3 features (see footnote 9) with and without the CoRe penalty. Table 4 shows the results for κ = 0.5. While the results show that both the pooled estimator as well as the CoRe estimator perform better using pre-trained Inception features, the relative improvement with the CoRe penalty is still 28% on test set 2.
5.2.3 Additional baselines: Unconditional variance regularization and grouping by class label | 1710.11469#70 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
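The comparison above fits an ℓ2-regularized logistic regression on pre-trained Inception V3 feature vectors (footnote 9 gives the feature extractor used). A minimal scikit-learn sketch is shown below; the random arrays stand in for the 2048-dimensional Inception V3 features and the gender labels, and the regularization strength C is a hypothetical value. The CoRe variant additionally adds the conditional variance penalty, which is not part of this plain baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 2048))   # placeholder for Inception V3 features
train_labels = rng.integers(0, 2, size=1000)     # placeholder for Y in {woman, man}

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # ridge-penalized logistic regression
clf.fit(train_features, train_labels)
print("training accuracy:", clf.score(train_features, train_labels))
```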
1710.11469 | 71 | 5.2.3 Additional baselines: Unconditional variance regularization and grouping by class label
As additional baselines, we consider the following two schemes: (i) we group all examples sharing the same class label and penalize with the conditional variance of the predicted logits, computed over these two groups; (ii) we penalize the overall variance of the predicted logits, i.e., a form of unconditional variance regularization. Figure 9 shows the performance of these two approaches. In contrast to the CoRe penalty, regularizing with the variance of the predicted logits conditional on Y only does not yield performance improvements on test set 2, compared to the pooled estimator (corresponding to a penalty weight of 0). Interestingly, using baseline (i) without a ridge penalty does yield an improvement on test set 1, compared to the pooled estimator with various strengths of the ridge penalty.
# 5.3 Eyeglasses detection with known and unknown image quality intervention | 1710.11469#71 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 72 | # 5.3 Eyeglasses detection with known and unknown image quality intervention
We now revisit the third example from §1.1. We again use the CelebA dataset and consider the problem of classifying whether the person in the image is wearing eyeglasses. Here, we modify the images in the following way: in the training set and in test set 1, we sample the image quality (see footnote 10) for all samples {i : y_i = 1} (all samples that show glasses) from a Gaussian distribution with mean µ = 30 and standard deviation σ = 10. Samples with y_i = 0 (no glasses) are unmodified. In other words, if the image shows a person wearing glasses, the
9. Retrieved from https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1.
10. We use ImageMagick (https://www.imagemagick.org) to change the quality of the compression through
convert -quality q_ij input.jpg output.jpg, where q_{i,j} ∼ N(30, 100).
[Figure 9, panels (a) and (b): test error on test sets 1 and 2 vs. penalty weight for the grouping-by-Y baseline, with one curve per ridge penalty weight.] | 1710.11469#72 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
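The image-quality intervention of Section 5.3 above lowers the JPEG quality of the glasses images, with the quality level drawn from N(30, 10^2) (footnote 10 uses ImageMagick's convert -quality). A Pillow-based sketch of the same idea is shown below; the file paths are placeholders and Pillow is only a stand-in for the ImageMagick command.

```python
import numpy as np
from PIL import Image

def degrade_jpeg_quality(in_path: str, out_path: str, rng: np.random.Generator) -> int:
    """Re-save an image with a JPEG quality level drawn from N(30, 10^2),
    clipped to a valid quality range."""
    q = int(np.clip(rng.normal(loc=30.0, scale=10.0), 1, 95))
    Image.open(in_path).convert("RGB").save(out_path, "JPEG", quality=q)
    return q

# Example usage (placeholder paths):
# rng = np.random.default_rng(0)
# degrade_jpeg_quality("input.jpg", "output.jpg", rng)
```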
1710.11469 | 73 |
[Figure 9, panels (c) and (d): test error on test sets 1 and 2 vs. penalty weight for the unconditional variance baseline, with one curve per ridge penalty weight.] | 1710.11469#73 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 74 |
Figure 9: Classiï¬cation for Y â {woman, man} with κ = 0.5, using the baselines which (i) penalize the variance of the predicted logits conditional on the class label Y only; and (ii) penalize the overall variance of the predicted logits (cf. §5.2.3). For baseline (i), panels (a) and (b) show the test error on test data sets 1 and 2 respectively as a function of the âbaseline penalty weightâ for various ridge penalty strengths. For baseline (ii), the equivalent plots are shown in panels (c) and (d). In contrast to the CoRe penalty, regularizing with these two baselines does not yield performance improvements on test set 2, compared to the pooled estimator (corresponding to a penalty weight of 0).
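To make the difference between these baselines and the CoRe penalty concrete, the following minimal numpy sketch computes all three penalties on a vector of predicted logits. The function names, the averaging over groups and the requirement of at least two members per group are illustrative assumptions, not code from the paper.

```python
import numpy as np

def core_penalty(logits, y, ids):
    """CoRe: variance of the predicted logits within each (Y, ID) group."""
    penalties = []
    for key in set(zip(y.tolist(), ids.tolist())):
        mask = (y == key[0]) & (ids == key[1])
        if mask.sum() > 1:  # only groups with at least two members contribute
            penalties.append(logits[mask].var())
    return float(np.mean(penalties)) if penalties else 0.0

def baseline_conditional_on_y(logits, y):
    """Baseline (i): variance of the logits conditional on the class label only."""
    return float(np.mean([logits[y == cls].var() for cls in np.unique(y)]))

def baseline_unconditional(logits):
    """Baseline (ii): overall (unconditional) variance of the logits."""
    return float(logits.var())

# Tiny example: two persons photographed twice each, plus two ungrouped images.
logits = np.array([2.1, 1.7, -0.3, -0.9, 1.5, -1.2])
y      = np.array([1,   1,    0,    0,   1,    0])
ids    = np.array([0,   0,    1,    1,   2,    3])
print(core_penalty(logits, y, ids),
      baseline_conditional_on_y(logits, y),
      baseline_unconditional(logits))
```

The only difference between the three penalties is the grouping over which the variance is taken, which is why only the CoRe variant can isolate within-person (style) variation.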
Training data (n = 20000):
Test set 1 (n = 5344):
Test set 2 (n = 5344):
5-layer CNN training error: 0% with add. CoRe penalty: 10%
5-layer CNN test error: 2% with add. CoRe penalty: 13% | 1710.11469#74 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 76 | Figure 10: Eyeglass detection for CelebA dataset with image quality interventions (which are un- known to any procedure used). The JPEG compression level is lowered for Y = 1 (glasses) samples on training data and test set 1 and lowered for Y = 0 (no glasses) samples for test set 2. To the human eye, these interventions are barely visible but the CNN that uses pooled data without CoRe penalty has exploited the correlation between image quality and outcome Y to achieve a (arguably spurious) low test error of 2% on test set 1. However, if the correlation between image quality and Y breaks down, as in test set 2, the CNN that uses pooled data without a CoRe penalty has a 65% misclassiï¬cation rate. The training data on the left show paired observations in two red boxes: these observations share the same label Y and show the same person ID. They are used to compute the conditional variance penalty for the CoRe estimator that does not suï¬er from the same degradation in performance for test set 2.
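The image quality intervention itself amounts to re-encoding an image at a lower JPEG quality. A minimal sketch using Pillow is given below; the helper names, the clipping range for the sampled quality and the use of a Gaussian with mean 30 and standard deviation 10 (as described further below in the text) are assumptions for illustration.

```python
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

def degrade_jpeg(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode the image at the given JPEG quality and load it back."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

def intervene(img: Image.Image, wears_glasses: bool) -> Image.Image:
    """Training data / test set 1: lower the quality only for Y = 1 (glasses)."""
    if wears_glasses:
        quality = int(np.clip(rng.normal(30, 10), 5, 95))
        return degrade_jpeg(img, quality)
    return img
```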
Training data (n = 20000):
Test set 1 (n = 5344):
Test set 2 (n = 5344):
5-layer CNN training error: 0% with added CoRe penalty: 3%
5-layer CNN test error: 2% with added CoRe penalty: 7%
5-layer CNN test error: 65% with add. CoRe penalty: 13% | 1710.11469#76 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 78 | Figure 11: Eyeglass detection for CelebA dataset with image quality interventions. The only dif- ference to Figure 10 is in the training data where the paired images now use the same underlying image in two diï¬erent JPEG compressions. The compression level is drawn from the same distribution. The CoRe penalty performs better than for the experiment in Figure 10 since we could explicitly control that only X style â¡ image quality varies between grouped examples. On the other hand, the performance of the pooled estima- tor is not changed in a noticeable way if we add augmented images as the (spurious) correlation between image quality and outcome Y still persists in the presence of the extra augmented images. Thus, the pooled estimator continues to be susceptible to image quality interventions.
image quality tends to be lower. In test set 2, the quality is reduced in the same way for yi = 0 samples (no glasses), while images with yi = 1 are not changed. Figure 10 shows examples from the training set and test sets 1 and 2. For the CoRe penalty, we calculate the conditional variance across images that share the same ID if Y = 1, that is across images that show the same person wearing glasses on all images. Observations with Y = 0 (not wearing glasses) are not grouped. Two examples are shown in the red box of Figure 10. Here, we have c = 5000 grouped observations among a total sample size of n = 20000. | 1710.11469#78 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 79 | Figure 10 shows misclassification rates for CoRe and the pooled estimator on test sets 1 and 2. The pooled estimator (only penalized with an ℓ2 penalty) achieves low error rates of 2% on test set 1, but suffers from a 65% misclassification error on test set 2, as now the relation between Y and the implicit X style variable (image quality) has been flipped. The CoRe estimator has a larger error of 13% on test set 1 as image quality as a feature is penalized by CoRe implicitly and the signal is less strong if image quality has been removed as a dimension. However, in test set 2 the performance of the CoRe estimator is 28% and improves substantially on the 65% error of the pooled estimator. The reason is again the same: the CoRe penalty ensures that image quality is not used as a feature to the same extent as for the pooled estimator. This increases the test error slightly if the samples are generated from the same distribution as training data (as here for test set 1) but substantially improves the test error if the distribution of image quality, conditional on the class label, is changed on test data (as here for test set 2).
28 | 1710.11469#79 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 80 | 28
Eyeglasses detection with known image quality intervention. To compare to the above results, we repeat the experiment by changing the grouped observations as follows. Above, we grouped images that had the same person ID when Y = 1. We refer to this scheme of grouping observations with the same (Y, ID) as "Grouping setting 2". Here, we use an explicit augmentation scheme and augment c = 5000 images with Y = 1 in the following way: each image is paired with a copy of itself and the image quality is adjusted as described above. In other words, the only difference between the two images is that the image quality differs slightly, depending on the value that was drawn from the Gaussian distribution with mean µ = 30 and standard deviation σ = 10, determining the strength of the image quality intervention. Both the original and the copy get the same value of the identifier variable ID. We call this grouping scheme "Grouping setting 1". Compare the left panels of Figures 10 and 11 for examples.
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 81 | While we used explicit changes in image quality in both above and here, we referred to grouping setting 2 as âunknown image quality interventionsâ as the training sample as in the left panel of Figure 10 does not immediately reveal that image quality is the important style variable. In contrast, the augmented data samples (grouping setting 1) we use here diï¬er only in their image quality for a constant (Y, ID).
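A sketch of how such grouped pairs could be constructed is shown below: each selected Y = 1 image is stored twice, once in its original form and once after a quality degradation (for example the JPEG re-encoding sketched earlier), and both copies receive the same ID. The function and variable names are hypothetical.

```python
import numpy as np

def make_grouping_setting_1(images, labels, n_pairs, degrade, seed=0):
    """Pair n_pairs images with Y = 1 with a degraded copy of themselves;
    original and copy share an ID so that CoRe can penalize their difference."""
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero(labels == 1)
    chosen = rng.choice(candidates, size=n_pairs, replace=False)
    grouped_images, grouped_labels, grouped_ids = [], [], []
    for new_id, i in enumerate(chosen):
        strength = rng.normal(30, 10)         # intervention strength, as in the text
        grouped_images.extend([images[i], degrade(images[i], strength)])
        grouped_labels.extend([1, 1])
        grouped_ids.extend([new_id, new_id])  # shared identifier for the pair
    return grouped_images, np.array(grouped_labels), np.array(grouped_ids)
```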
Figure 11 shows examples and results. The pooled estimator performs more or less identically to the previous dataset. The explicit augmentation did not help as the association between image quality and whether eyeglasses are worn is not changed in the pooled data after including the augmented data samples. The misclassification error of the CoRe estimator is substantially better than the error rate of the pooled estimator. The error rate of 13% on test set 2 also improves on the 28% achieved by the CoRe estimator in grouping setting 2. We see that using grouping setting 1 works best since we could explicitly control that only X style ≡ image quality varies between grouped examples. In grouping setting 2, different images of the same person can vary in many factors, making it more challenging to isolate image quality as the factor to be invariant against.
# 5.4 Stickmen image-based age classiï¬cation with unknown movement interventions | 1710.11469#81 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 82 | # 5.4 Stickmen image-based age classiï¬cation with unknown movement interventions
In this example we consider synthetically generated stickmen images; see Figure 12 for some examples. The target of interest is Y â {adult, child}. The core feature X core is here the height of each person. The class Y is causal for height and height cannot be easily intervened on or change in diï¬erent domains. Height is thus a robust predictor for diï¬erentiating between children and adults. As style feature we have here the movement of a person (distribution of angles between body, arms and legs). For the training data we created a dependence between age and the style feature âmovementâ, which can be thought to arise through a hidden common cause D, namely the place of observation. The data generating process is illustrated in Figure D.6. For instance, the images of children might mostly show children playing while the images of adults typically show them in more âstaticâ postures. The left panel of Figure 12 shows examples from the training set where large movements are associated with children and small movements are associated with adults. Test set 1 follows the same distribution, as shown in the middle panel. A standard CNN will exploit this relationship between movement and the label Y of interest, whereas this is discouraged
Training data (n = 20000):
Test set 1 (n = 20000): | 1710.11469#82 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 84 | Figure 12: Classiï¬cation into {adult, child} based on stickmen images, where children tend to be smaller and adults taller. In training and test set 1 data, children tend to have stronger movement whereas adults tend to stand still. In test set 2 data, adults show stronger movement. The two red boxes in the panel with the training data show two out of the c = 50 pairs of examples over which the conditional variance is calculated. The CoRe penalty leads to a network that generalizes better for test set 2 data, where the spurious correlation between age and movement is reversed, if compared to the training data. | 1710.11469#84 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 85 | by the conditional variance penalty of CoRe. The latter pairs images of the same person in slightly different movements, as shown by the red boxes in the leftmost panel of Figure 12. If the learned model exploits this dependence between movement and age for predicting Y, it will fail when presented with images of, say, dancing adults. The right panel of Figure 12 shows such examples (test set 2). The standard CNN suffers in this case from a 41% misclassification rate, as opposed to the 3% on test set 1 data. For as few as c = 50 paired observations, the network with an added CoRe penalty, in contrast, also achieves 4% on test set 1 data and succeeds in achieving a 9% error rate on test set 2, whereas the pooled estimator fails on this dataset with a test error of 41%.
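To make the confounding between age and movement concrete, the following toy numpy sketch mimics the data-generating process described above, with a hidden place-of-observation variable D driving both the class and the amount of movement; all numbers are illustrative assumptions, not the values used to render the stickmen images.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_stickmen(n):
    d = rng.binomial(1, 0.5, size=n)                        # hidden domain: 1 = playground
    is_child = rng.binomial(1, np.where(d == 1, 0.9, 0.1))  # Y depends on D
    height = np.where(is_child == 1,                        # core feature, caused by Y
                      rng.normal(120, 10, size=n),
                      rng.normal(175, 10, size=n))
    movement = np.where(d == 1,                             # style feature, caused by D
                        rng.normal(40, 10, size=n),         # large joint angles
                        rng.normal(5, 3, size=n))           # mostly static
    return is_child, height, movement

y, height, movement = sample_stickmen(20000)
```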
These results suggest that the learned representation of the pooled estimator uses move- ment as a predictor for age while CoRe does not use this feature due to the conditional variance regularization. Importantly, including more grouped examples would not improve the performance of the pooled estimator as these would be subject to the same bias and hence also predominantly have examples of heavily moving children and âstaticâ adults (also see Figure D.7 which shows results for c â {20, 500, 2000}). | 1710.11469#85 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 86 | 5.5 MNIST: more sample eï¬cient data augmentation The goal of using CoRe in this example is to make data augmentation more eï¬cient in terms of the required samples. In data augmentation, one creates additional samples by modifying the original inputs, e.g. by rotating, translating, or ï¬ipping the images (Sch¨olkopf et al., 1996). In other words, additional samples are generated by interventions on style fea- tures. Using this augmented data set for training results in invariance of the estimator with respect to the transformations (style features) of interest. For CoRe we can use the group- ing information that the original and the augmented samples belong to the same object. This enforces the invariance with respect to the style features more strongly compared to normal data augmentation which just pools all samples. We assess this for the style feature
30 | 1710.11469#86 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 88 | Figure 13: Data augmentation for MNIST images. The left shows training data with a few ro- tated images. Evaluating on only rotated images from the test set, a standard network achieves only 22% accuracy. We can add the CoRe penalty by computing the condi- tional variance over images that were generated from the same original image. The test error is then lowered to 10% on the test data of rotated images.
"rotation" on MNIST (LeCun et al., 1998) and only include c = 200 augmented training examples for m = 10000 original samples, resulting in a total sample size of n = 10200. The degree of the rotations is sampled uniformly at random from [35, 70]. Figure 13 shows examples from the training set. By using CoRe the average test error on rotated examples is reduced from 22% to 10%. Very few augmented samples are thus sufficient to lead to stronger rotational invariance. The standard approach of creating augmented data and pooling all images requires, in contrast, many more samples to achieve the same effect. Additional results for m ∈ {1000, 10000} and c ranging from 100 to 5000 can be found in Figure D.5 in Appendix §D.4.
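A sketch of this augmentation step, assuming scipy is available, is given below; the original image and its rotated copy get a shared ID so that the conditional variance is computed over exactly these pairs. The helper name and the zero-padding mode are our own choices.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_with_rotations(images, n_augment):
    """Rotate a small subset of images by an angle drawn from U[35, 70] degrees
    and return (original, rotated) pairs together with shared IDs."""
    chosen = rng.choice(len(images), size=n_augment, replace=False)
    pairs, ids = [], []
    for new_id, i in enumerate(chosen):
        angle = rng.uniform(35, 70)
        rotated = rotate(images[i], angle, reshape=False, mode="constant", cval=0.0)
        pairs.extend([images[i], rotated])
        ids.extend([new_id, new_id])
    return np.stack(pairs), np.array(ids)
```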
# 5.6 Elmer the Elephant | 1710.11469#88 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 89 | # 5.6 Elmer the Elephant
In this example, we want to assess whether invariance with respect to the style feature âcolorâ can be achieved. In the childrenâs book âElmer the elephantâ11 one instance of a colored elephant suï¬ces to recognize it as being an elephant, making the color âgrayâ no longer an integral part of the object âelephantâ. Motivated by this process of concept formation, we would like to assess whether CoRe can exclude âcolorâ from its learned representation by penalizing conditional variance appropriately.
We work with the âAnimals with attributes 2â (AwA2) dataset (Xian et al., 2017) and consider classifying images of horses and elephants. We include additional examples by adding grayscale images for c = 250 images of elephants. These additional examples do not distinguish themselves strongly from the original training data as the elephant images are already close to grayscale images. The total training sample size is 1850.
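The grayscale augmentation itself is a one-line operation; a minimal sketch with Pillow (helper name ours) is shown below. Original and grayscale copy would then be stored under the same ID, exactly as in the grouping settings above.

```python
from PIL import Image

def grayscale_copy(img: Image.Image) -> Image.Image:
    """Drop the colour information but keep three channels so that the
    grayscale copy can be fed to the same network as the original."""
    return img.convert("L").convert("RGB")
```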
11. https://en.wikipedia.org/wiki/Elmer_the_Patchwork_Elephant
Training data (n = 1850): Test data 1 (n = 414): Test data 2 (n = 414): | 1710.11469#89 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 91 | Figure 14: Elmer-the-Elephant dataset. The left panel shows training data with a few additional grayscale elephants. The pooled estimator learns that color is predictive for the animal class and achieves test error of 24% on test set 1 where this association is still true but suï¬ers a misclassiï¬cation error of 53% on test set 2 where this association breaks down. By adding the CoRe penalty, the test error is consistently around 30%, irrespective of the color distribution of horses and elephants. | 1710.11469#91 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 92 | Figure 14 shows examples and misclassiï¬cation rates from the training set and test sets for CoRe and the pooled estimator on diï¬erent test sets. Examples from these and more test sets can be found in Figure D.10. Test set 1 contains original, colored images only. In test set 2 images of horses are in grayscale and the colorspace of elephant images is modiï¬ed, eï¬ectively changing the color gray to red-brown. We observe that the pooled estimator does not perform well on test set 2 as its learned representation seems to exploit the fact that âgrayâ is predictive for âelephantâ in the training set. This association is no longer valid for test set 2. In contrast, the predictive performance of CoRe is hardly aï¬ected by the changing color distributions. More details can be found in Appendix §D.7.
It is noteworthy that a colored elephant can be recognized as an elephant by adding a few examples of a grayscale elephant to the very lightly colored pictures of natural elephants. If we just pool over these examples, there is still a strong bias that elephants are gray. The CoRe estimator, in contrast, demands invariance of the prediction for instances of the same elephant and we can learn color invariance with a few added grayscale images.
# 5.7 Eyeglasses detection: unknown brightness intervention | 1710.11469#92 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 93 | # 5.7 Eyeglasses detection: unknown brightness intervention
As in §5.3 we work with the CelebA dataset and try to classify whether the person in the image is wearing eyeglasses. Here we analyze a confounded setting that could arise as follows. Say the hidden common cause D of Y and X style is a binary variable and indicates whether the image was taken outdoors or indoors. If it was taken outdoors, then the person tends to wear (sun-)glasses more often and the image tends to be brighter. If the image was taken indoors, then the person tends not to wear (sun-)glasses and the image tends to be darker. In other words, the style variable X style is here equivalent to brightness and the structure of the data generating process is equivalent to the one shown in Figure D.6. Figure 15 shows examples from the training set and test sets. As previously, we compute the conditional variance over images of the same person, sharing the same class label (and
32 | 1710.11469#93 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 95 | Figure 15: Eyeglass detection for CelebA dataset with brightness interventions (which are unknown to any procedure used). On training data and test set 1 data, images where people wear glasses tend to be brighter whereas on test set 2 images where people do not wear glasses tend to be brighter.
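A sketch of the brightness intervention, following the sampling scheme that is made precise in the paragraph below, is given here assuming numpy; the clipping at zero corresponds to the [·]+ operation in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def brightness_values(y, beta=20.0):
    """b = [100 + y * e]_+ with e ~ Exp(1/beta) and y in {-1, +1}:
    images with glasses (y = +1) tend to come out brighter."""
    y = np.asarray(y, dtype=float)
    e = rng.exponential(scale=beta, size=y.shape)
    return np.clip(100.0 + y * e, 0.0, None)
```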
the CoRe estimator is hence not using the knowledge that brightness is important). Two alternatives for constructing grouped observations in this setting are discussed in §D.2. We use c = 2000 and n = 20000. For the brightness intervention, we sample the value for the magnitude of the brightness increase resp. decrease from an exponential distribution with mean β = 20. In the training set and test set 1, we sample the brightness value as $b_{i,j} = [100 + y_i e_{i,j}]_+$ where $e_{i,j} \sim \text{Exp}(\beta^{-1})$ and $y_i \in \{-1, 1\}$, where $y_i = 1$ indicates presence of glasses and $y_i = -1$ indicates absence.12 For test set 2, we use instead $b_{i,j} = [100 - y_i e_{i,j}]_+$, so that the relation between brightness and glasses is flipped. | 1710.11469#95 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 96 | Figure 15 shows misclassiï¬cation rates for CoRe and the pooled estimator on diï¬erent test sets. Examples from all test sets can be found in Figure D.3. First, we notice that the pooled estimator performs better than CoRe on test set 1. This can be explained by the fact that it can exploit the predictive information contained in the brightness of an image while CoRe is restricted not to do so. Second, we observe that the pooled estimator does not perform well on test set 2 as its learned representation seems to use the imageâs brightness as a predictor for the response which fails when the brightness distribution in the test set diï¬ers signiï¬cantly from the training set. In contrast, the predictive performance of CoRe is hardly aï¬ected by the changing brightness distributions. Results for β â {5, 10, 20} and c â {200, 5000} can be found in Figure D.4 in Appendix §D.2.
# 6. Further related work
Encoding certain invariances in estimators is a well-studied area in computer vision and machine learning with an extensive body of literature. While a large part of this work assumes the desired invariance to be known, fewer approaches aim to learn the required | 1710.11469#96 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
1710.11469
invariances from data and the focus often lies on geometric transformations of the input data or explicitly creating augmented observations (Sohn and Lee, 2012; Khasanova and Frossard, 2017; Hashimoto et al., 2017; Devries and Taylor, 2017). The main difference between this line of work and CoRe is that we do not need to know the style features explicitly, the set of possible style features is not restricted to a particular class of transformations, and we do not aim to create augmented observations in a generative framework.
Recently, various approaches have been proposed that leverage causal motivations for deep learning or use deep learning for causal inference, related to e.g. the problems of cause-effect inference and generative adversarial networks (Chalupka et al., 2014; Lopez-Paz et al., 2017; Lopez-Paz and Oquab, 2017; Goudet et al., 2017; Bahadori et al., 2017; Besserve et al., 2018; Kocaoglu et al., 2018).
1710.11469
Kilbertus et al. (2017) exploit causal reasoning to characterize fairness considerations in machine learning. Distinguishing between the protected attribute and its proxies, they derive causal non-discrimination criteria. The resulting algorithms avoiding proxy discrimination require classifiers to be constant as a function of the proxy variables in the causal graph, thereby bearing some structural similarity to our style features.
Distinguishing between core and style features can be seen as some form of disentangling factors of variation. Estimating disentangled factors of variation has gathered a lot of interest in the context of generative modeling. As in CoRe, Bouchacourt et al. (2018) exploit grouped observations. In a variational autoencoder framework, they aim to separate style and content: they assume that samples within a group share a common but unknown value for one of the factors of variation while the style can differ. Denton and Birodkar (2017) propose an autoencoder framework to disentangle style and content in videos using an adversarial loss term where the grouping structure induced by clip identity is exploited. Here we try to solve a classification task directly without estimating the latent factors explicitly as in a generative framework.
1710.11469
In the computer vision literature, various works have used identity information to achieve pose invariance in the context of face recognition (see, e.g., Bartlett and Sejnowski, 1996; Tran et al., 2017). More generally, the idea of exploiting various observations of the same underlying object is related to multi-view learning. In the context of adversarial examples, Kannan et al. (2018) recently proposed the defense "Adversarial logit pairing", which is methodologically equivalent to the CoRe penalty when using the squared error loss. Several empirical studies have shown mixed results regarding its performance against $\ell_\infty$ perturbations (Engstrom et al., 2018; Mosbach et al., 2018); so far this setting has not been analyzed theoretically and hence it is an open question whether a CoRe-type penalty constitutes an effective defense against adversarial examples.
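To make the stated equivalence concrete, the following minimal NumPy sketch (illustrative only, not taken from the paper's or ALP's code; `pair_variance` and `logit_pairing_term` are made-up names, and scalar outputs are assumed) treats a clean example and its adversarially perturbed copy as a group of size two: the within-pair variance of the model output is, up to a constant factor, exactly the squared output difference penalized by adversarial logit pairing.

```python
# Illustrative sketch: for a pair (clean, adversarial), the conditional variance of the
# model output equals the squared output difference divided by four, i.e. it matches the
# adversarial-logit-pairing term up to a constant factor.
import numpy as np

def pair_variance(f_clean, f_adv):
    # variance of the two outputs within the group {x, x_adv}
    return np.var(np.array([f_clean, f_adv]))

def logit_pairing_term(f_clean, f_adv):
    # squared difference of the outputs, as used in adversarial logit pairing
    return (f_clean - f_adv) ** 2

# the two penalties agree up to the factor 1/4
assert np.isclose(pair_variance(1.3, 0.7), logit_pairing_term(1.3, 0.7) / 4.0)
```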
# 7. Conclusion
Distinguishing the latent features in an image into core and style features, we have proposed conditional variance regularization (CoRe) to achieve robustness with respect to arbitrarily large interventions on the style or "orthogonal" features. The main idea of the CoRe estimator is to exploit the fact that we often have instances of the same object in the training data. By demanding invariance of the classifier amongst a group of instances that relate to the same object, we can achieve invariance of the classification performance with
1710.11469
respect to interventions on style features such as image quality, fashion type, color, or body posture. The training also works despite sampling biases in the data.
There are two main application areas:
1. If the style features are known explicitly, we can achieve the same classification performance as standard data augmentation approaches with substantially fewer augmented samples, as shown for example in §5.5.
2. Perhaps more interesting are settings in which it is unknown what the style features are, with examples in §5.1, §5.2, §5.3, §5.4 and §5.7. CoRe regularization forces predictions to be based on features that do not vary strongly between instances of the same object. We showed in the examples and in Theorems 1 and 2 that this regularization achieves distributional robustness with respect to changes in the distribution of the (unknown) style variables.
1710.11469
An interesting line of work would be to use larger models such as Inception or large ResNet architectures (Szegedy et al., 2015; He et al., 2016). These models have been trained to be invariant to an array of explicitly defined style features. In §5.2 we include results which show that using Inception V3 features does not guard against interventions on more implicit style features. We would thus like to assess what benefits CoRe can bring for training Inception-style models end-to-end, both in terms of sample efficiency and in terms of generalization performance.
# Acknowledgments
We thank Brian McWilliams, Jonas Peters, and Martin Arjovsky for helpful comments and discussions and CSCS for provision of computational resources. A preliminary version of this work was presented at the NIPS 2017 Interpretable ML Symposium and we thank participants of the symposium for very helpful discussions. We would also like to thank three anonymous referees and the action editor Edo Airoli for detailed and very helpful feedback on an earlier version of the manuscript.
# References
1710.11469
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
1710.11469
J. Aldrich. Autonomy. Oxford Economic Papers, 41:15–34, 1989.
J. Bagnell. Robust supervised learning. In Proceedings of the national conference on artificial intelligence, volume 20, page 714. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.
M. T. Bahadori, K. Chalupka, E. Choi, R. Chen, W. F. Stewart, and J. Sun. Causal regularization. arXiv preprint arXiv:1702.02604, 2017.
S. Barocas and A. D. Selbst. Big Data's Disparate Impact. 104 California Law Review 671, 2016.
M. S. Bartlett and T. J. Sejnowski. Viewpoint invariant face recognition using independent component analysis and attractor networks. In Proceedings of the 9th International Conference on Neural Information Processing Systems, NIPS'96, pages 817–823, Cambridge, MA, USA, 1996. MIT Press.
M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399–2434, 2006.
1710.11469 | 105 | S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 19. 2007.
A. Ben-Tal, D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341–357, 2013.
M. Besserve, N. Shajarisales, B. Schölkopf, and D. Janzing. Group invariance principles for causal generative models. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), volume 84 of Proceedings of Machine Learning Research, pages 557–565. PMLR, 2018.
T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems 29. 2016.
1710.11469
D. Bouchacourt, R. Tomioka, and S. Nowozin. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In AAAI Conference on Artificial Intelligence, 2018.
K. Chalupka, P. Perona, and F. Eberhardt. Visual Causal Feature Learning. Uncertainty in Artificial Intelligence, 2014.
K. Crawford. Artificial intelligence's white guy problem. The New York Times, June 25 2016, 2016. URL https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
G. Csurka. A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pages 1–35. 2017.
E. L. Denton and V. Birodkar. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems 30. 2017.
T. Devries and G. W. Taylor. Dataset augmentation in feature space. ICLR Workshop Track, 2017.
J. Emspak. How a machine learns prejudice. Scientific American, December 29 2016, 2016. URL https://www.scientificamerican.com/article/how-a-machine-learns-prejudice/.
1710.11469 | 107 | L. Engstrom, A. Ilyas, and A. Athalye. Evaluating and understanding the robustness of adversarial logit pairing. arXiv preprint arXiv:1807.10272, 2018.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1), 2016.
R. Gao, X. Chen, and A. Kleywegt. arXiv preprint arXiv:1712.06050, 2017.
B. Gong, K. Grauman, and F. Sha. Reshaping visual datasets for domain adaptation. In Advances in Neural Information Processing Systems 26, pages 1286–1294. Curran Associates, Inc., 2013.
M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf. Domain adaptation with conditional transferable components. In International Conference on Machine Learning, 2016.
1710.11469
I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
O. Goudet, D. Kalainathan, P. Caillou, D. Lopez-Paz, I. Guyon, M. Sebag, A. Tritas, and P. Tubaro. Learning Functional Causal Models with Generative Neural Networks. arXiv preprint arXiv:1709.05321, 2017.
T. Haavelmo. The probability approach in econometrics. Econometrica, 12:S1–S115 (supplement), 1944.
D. A. Harville. Bayesian inference for variance components using only error contrasts. Biometrika, 61(2):383–385, 1974.
T. B. Hashimoto, P. S. Liang, and J. C. Duchi. Unsupervised transformation learning via convex relaxations. In Advances in Neural Information Processing Systems 30, pages 6875–6883. Curran Associates, Inc., 2017.
1710.11469
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, 2015.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
J. Hoffman, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multi-source domain adaptation. In Computer Vision – ECCV 2012, pages 702–715, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
H. Kannan, A. Kurakin, and I. J. Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
R. Khasanova and P. Frossard. Graph-based isometry invariant representation learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1847–1856, 2017.
1710.11469
N. Kilbertus, M. Rojas Carulla, G. Parascandolo, M. Hardt, D. Janzing, and B. Schölkopf. Avoiding discrimination through causal reasoning. Advances in Neural Information Processing Systems 30, pages 656–666, 2017.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
M. Kocaoglu, C. Snyder, A. Dimakis, and S. Vishwanath. CausalGAN: Learning causal im- plicit generative models with adversarial training. International Conference on Learning Representations, 2018.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25. 2012.
A. Kuehlkamp, B. Becker, and K. Bowyer. Gender-from-iris or gender-from-mascara? In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 2017.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
1710.11469
K.-C. Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327, 1991.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
D. Lopez-Paz and M. Oquab. Revisiting Classifier Two-Sample Tests. International Conference on Learning Representations (ICLR), 2017.
D. Lopez-Paz, R. Nishihara, S. Chintala, B. Schölkopf, and L. Bottou. Discovering causal signals in images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017.
S. Magliacane, T. van Ommen, T. Claassen, S. Bongers, P. Versteeg, and J. Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. Advances in Neural Information Processing Systems, 2018.
1710.11469
N. Meinshausen. Causality from a distributional robustness point of view. In 2018 IEEE Data Science Workshop (DSW), pages 6–10, 2018.
M. Mosbach, M. Andriushchenko, T. Trost, M. Hein, and D. Klakow. Logit pairing methods can fool gradient-based attacks. arXiv preprint arXiv:1810.12042, 2018.
H. Namkoong and J. C. Duchi. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems, pages 2975–2984, 2017.
J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, USA, 2nd edition, 2009.
J. Peters, P. Bühlmann, and N. Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society, Series B, 78:947–1012, 2016.
J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
1710.11469
T. Richardson and J. M. Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Center for the Statistics and the Social Sciences, University of Washington Series. Working Paper 128, 30 April 2013, 2013.
M. Rojas-Carulla, B. Schölkopf, R. Turner, and J. Peters. Causal transfer in machine learning. To appear in Journal of Machine Learning Research, 2018.
D. Rothenhäusler, P. Bühlmann, N. Meinshausen, and J. Peters. Anchor regression: heterogeneous data meets causality. arXiv preprint arXiv:1801.06229, 2018.
B. Schölkopf, C. Burges, and V. Vapnik. Incorporating invariances in support vector learning machines. In Artificial Neural Networks – ICANN 96, pages 47–52, Berlin, Heidelberg, 1996. Springer Berlin Heidelberg.
1710.11469
B. Schölkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, 2012.
S. Shafieezadeh-Abadeh, D. Kuhn, and P. Esfahani. Regularization via mass transportation. arXiv preprint arXiv:1710.10016, 2017.
A. Sinha, H. Namkoong, and J. Duchi. Certifiable distributional robustness with principled adversarial training. In International Conference on Learning Representations, 2018.
K. Sohn and H. Lee. Learning invariant representations with local transformations. In Proceedings of the 29th International Conference on Machine Learning, ICML'12, pages 1339–1346, USA, 2012. Omnipress.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015.
A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011.
L. Tran, X. Yin, and X. Liu. Disentangled representation learning GAN for pose-invariant face recognition. In Proceedings of IEEE Computer Vision and Pattern Recognition, Honolulu, HI, July 2017.
G. Verbeke and G. Molenberghs. Linear mixed models for longitudinal data. Springer Science & Business Media, 2009.
C. Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003.
R. Volpi, H. Namkoong, O. Sener, J. Duchi, V. Murino, and S. Savarese. Generalizing to unseen domains via adversarial data augmentation. arXiv preprint arXiv:1805.12018, 2018.
Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata. Zero-shot learning – A comprehensive evaluation of the good, the bad and the ugly. arXiv preprint arXiv:1707.00600, 2017.
C. Xu, D. Tao, and C. Xu. A survey on multi-view learning. arXiv preprint arXiv:1304.5634, 2013.
H. Xu, C. Caramanis, and S. Mannor. Robust regression and lasso. In Advances in Neural Information Processing Systems, pages 1801–1808, 2009.
X. Yu, T. Liu, M. Gong, K. Zhang, and D. Tao. Transfer learning with label noise. arXiv preprint arXiv:1707.09724, 2017.
K. Zhang, B. Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, 2013.
K. Zhang, M. Gong, and B. Schölkopf. Multi-source domain adaptation: A causal view. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
# Appendix
# Appendix A. Proof of Theorem 1
First part. To show the first part, namely that with probability 1, $L_\infty(\hat\theta^{\mathrm{pool}}) = \infty$, we need to show that $W^t \hat\theta^{\mathrm{pool}} \neq 0$ with probability 1. The reason this is sufficient is as follows: if $W^t\theta \neq 0$, then $L_\infty(\theta) = \infty$ as we can then find a $v \in \mathbb{R}^q$ such that $\gamma := \theta^t W v \neq 0$. Assume without loss of generality that $v$ is normed such that $E(E(v^t \Sigma^{-1}_{y,\mathrm{id}} v \mid Y = y, \mathrm{ID} = \mathrm{id})) = 1$. Setting $\Delta_\xi = \xi v$ for $\xi \in \mathbb{R}$, we have that $(\mathrm{ID}, Y, X^{\mathrm{style}} + \Delta_\xi)$ is in the class $F_\xi$ if the distribution of $(\mathrm{ID}, Y, X^{\mathrm{style}})$ is equal to $F_0$. Furthermore, $x(\Delta_\xi)^t\theta = x(\Delta = 0)^t\theta + \xi\gamma$. Hence $\log(1 + \exp(-y \cdot x(\Delta_\xi)^t\theta)) \to \infty$ for either $\xi \to \infty$ or $\xi \to -\infty$.
To show that $W^t\hat\theta^{\mathrm{pool}} \neq 0$ with probability 1, let $\hat\theta^*$ be the oracle estimator that is constrained to be orthogonal to the column space of $W$:
$$\hat\theta^* = \mathrm{argmin}_{\theta:\, W^t\theta = 0}\; L_n(\theta) \quad \text{with} \quad L_n(\theta) = \sum_{i=1}^n \ell(y_i, f_\theta(x_i)). \qquad (15)$$
We show $W^t\hat\theta^{\mathrm{pool}} \neq 0$ by contradiction. Assume hence that $W^t\hat\theta^{\mathrm{pool}} = 0$. If this is indeed the case, then the constraint $W^t\theta = 0$ in (15) becomes non-active and we have $\hat\theta^{\mathrm{pool}} = \hat\theta^*$. This would imply that taking the directional derivative of the training loss with respect to any $\delta \in \mathbb{R}^p$ in the column space of $W$ should vanish at the solution $\hat\theta^*$. In other words, define the gradient as $g(\theta) = \nabla_\theta L_n(\theta) \in \mathbb{R}^p$. The implication is then that for all $\delta$ in the column-space of $W$,
$$\delta^t g(\hat\theta^*) = 0 \qquad (16)$$
and we will show the latter condition is violated almost surely.
As we work with the logistic loss and $Y \in \{-1, 1\}$, the loss is given by $\ell(y_i, f_\theta(x_i)) = \log(1 + \exp(-y_i x_i^t\theta))$. Define $r_i(\theta) := y_i / (1 + \exp(y_i x_i^t\theta))$. For all $i = 1, \ldots, n$ we have $r_i \neq 0$. Then
$$g(\hat\theta^*) = \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\, x_i. \qquad (17)$$
The training images can be written according to the model as $x_i = x_i^0 + W x_i^{\mathrm{style}}$, where $X^0 := k_x(X^{\mathrm{core}}, \varepsilon_X)$ are the images in absence of any style variation. Since the style features only have an effect on the column space of $W$ in $X$, the oracle estimator $\hat\theta^*$ is identical under the true training data and the (hypothetical) training data $x_i^0$, $i = 1, \ldots, n$ in absence of style variation. As $X - X^0 = W X^{\mathrm{style}}$, equation (17) can also be written as
$$\delta^t g(\hat\theta^*) = \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\,(x_i^0)^t\delta + \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\,(x_i^{\mathrm{style}})^t W^t\delta. \qquad (18)$$
Since $\delta$ is in the column-space of $W$, there exists $u \in \mathbb{R}^q$ such that $\delta = W u$ and we can write (18) as
$$\delta^t g(\hat\theta^*) = \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\,(x_i^0)^t W u + \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\,(x_i^{\mathrm{style}})^t W^t W u. \qquad (19)$$
From (A2) we have that the eigenvalues of $W^t W$ are all positive. Also $r_i(\hat\theta^*)$ is not a function of the interventions $x_i^{\mathrm{style}}$, $i = 1, \ldots, n$ since, as above, the estimator $\hat\theta^*$ is identical whether trained on the original data $x_i$ or on the intervention-free data $x_i^0$, $i = 1, \ldots, n$. If we condition on everything except for the random interventions by conditioning on $(x_i^0, y_i)$ for $i = 1, \ldots, n$, then the rhs of (19) can be written as
$$a^t u + B^t u,$$
where $a \in \mathbb{R}^q$ is fixed (conditionally) and $B = \frac{1}{n}\sum_{i=1}^n r_i(\hat\theta^*)\, W^t W\, x_i^{\mathrm{style}} \in \mathbb{R}^q$ is a random vector with $B \neq -a$ with probability 1 by (A1) and (A2). Hence the left hand side of (19) is not identically 0 with probability 1 for any given $\delta$ in the column-space of $W$. This shows that the implication (16) is incorrect with probability 1 and hence completes the proof of the first part by contradiction.
Invariant parameter space. Before continuing with the second part of the proof, some definitions. Let $I$ be the invariant parameter space
$$I := \{\theta : f_\theta(x(\Delta)) \text{ is constant as a function of } \Delta \in \mathbb{R}^q \text{ for all } x \in \mathbb{R}^p\}.$$
For all $\theta \in I$, the loss (7) for any $F \in F_\xi$ is identical to the loss under $F_0$. That is, for all $\xi \ge 0$,
$$\text{if } \theta \in I, \text{ then } \sup_{F \in F_\xi} E_F\big[\ell(Y, f_\theta(X))\big] = E_{F_0}\big[\ell(Y, f_\theta(X))\big].$$
The optimal predictor in the invariant space I is
$$\theta^* = \mathrm{argmin}_\theta\; E_{F_0}\big[\ell(Y, f_\theta(X))\big] \quad \text{such that } \theta \in I. \qquad (20)$$
If $f_\theta$ is only a function of the core features $X^{\mathrm{core}}$, then $\theta \in I$. The challenge is that the core features are not directly observable and we have to infer the invariant space $I$ from data.
Second part. For the second part, we first show that with probability at least $p_n$, as defined in (A3), $\hat\theta^{\mathrm{core}} = \hat\theta^*$ with $\hat\theta^*$ defined as in (15). The invariant space for this model is the linear subspace $I = \{\theta : W^t\theta = 0\}$ and by their respective definitions,
$$\hat\theta^* = \mathrm{argmin}_\theta\; \frac{1}{n}\sum_{i=1}^n \ell(y_i, f_\theta(x_i)) \quad \text{such that } \theta \in I, \qquad \hat\theta^{\mathrm{core}} = \mathrm{argmin}_\theta\; \frac{1}{n}\sum_{i=1}^n \ell(y_i, f_\theta(x_i)) \quad \text{such that } \theta \in \hat{I}_n.$$
Since we use $\hat{I}_n = \hat{I}_n(\tau)$ with $\tau = 0$,
$\hat{I}_n = \{\theta : E(\mathrm{Var}(f_\theta(X) \mid Y, \mathrm{ID})) = 0\}$. This implies that for $\theta \in \hat{I}_n$, $f_\theta(x_i) = f_\theta(x_{i'})$ if $i, i' \in S_j$ for some $j \in \{1, \ldots, m\}$.$^{13}$ Since $f_\theta(x) = f_\theta(x')$ implies $(x - x')^t\theta = 0$, it follows that $(x_i - x_{i'})^t\theta = 0$ if $i, i' \in S_j$ for some $j \in \{1, \ldots, m\}$ and hence
$$\hat{I}_n \subseteq \{\theta : (x_i - x_{i'})^t\theta = 0 \text{ if } i, i' \in S_j \text{ for some } j \in \{1, \ldots, m\}\}.$$
13. Recall that $(y_i, \mathrm{id}_i) = (y_{i'}, \mathrm{id}_{i'})$ if $i, i' \in S_j$ as the subsets $S_j$, $j = 1, \ldots, m$, collect all observations that have a unique realization of $(Y, \mathrm{ID})$.
Since $X^{\mathrm{style}}$ has a linear influence on $X$ in (P), $x_i - x_{i'} = W(\Delta_i - \Delta_{i'})$ if $i, i'$ are in the same group $S_j$ of observations for some $j \in \{1, \ldots, m\}$. Note that the number of grouped examples $n - m$ is equal to or exceeds the rank $q$ of $W$ with probability $p_n$, using (A3), and $p_n \to 1$ for $n \to \infty$. By (A2), it follows then with probability at least $p_n$ that $\hat{I}_n \subseteq \{\theta : W^t\theta = 0\} = I$. As, by definition, $I \subseteq \hat{I}_n$ is always true, we have with probability $p_n$ that $I = \hat{I}_n$. Hence, with probability $p_n$ (and $p_n \to 1$ for $n \to \infty$), $\hat\theta^{\mathrm{core}} = \hat\theta^*$. It thus remains to be shown that
$$L_\infty(\hat\theta^*) \to_p \inf_\theta L_\infty(\theta). \qquad (21)$$
Since $\hat\theta^*$ is in $I$, we have $\ell(y, f_{\hat\theta^*}(x(\Delta))) = \ell(y, f_{\hat\theta^*}(x^0))$, where $x^0$ are the previously defined data in absence of any style variance. Hence
$$\hat\theta^* = \mathrm{argmin}_\theta\; \frac{1}{n}\sum_{i=1}^n \ell(y_i, f_\theta(x_i^0)) \quad \text{such that } \theta \in I, \qquad (22)$$
that is, the estimator is unchanged if we use the (hypothetical) data $x_i^0$, $i = 1, \ldots, n$ as training data. The population optimal parameter vector defined in (20) as
$$\theta^* = \mathrm{argmin}_\theta\; E_{F_0}\big[\ell(Y, f_\theta(X))\big] \quad \text{such that } \theta \in I \qquad (23)$$
is for all $\xi \ge 0$ identical to
$$\mathrm{argmin}_\theta\; \sup_{F \in F_\xi} E_F\big[\ell(Y, f_\theta(X))\big] \quad \text{such that } \theta \in I.$$
Hence (22) and (23) can be written as
$$\hat\theta^* = \mathrm{argmin}_{\theta \in I}\; L_n^{(0)}(\theta) \quad \text{with} \quad L_n^{(0)}(\theta) := \frac{1}{n}\sum_{i=1}^n \ell(y_i, f_\theta(x_i^0)),$$
$$\theta^* = \mathrm{argmin}_{\theta \in I}\; L^{(0)}(\theta) \quad \text{with} \quad L^{(0)}(\theta) := E\big[\ell(Y, f_\theta(X^0))\big].$$
By uniform convergence of $L_n^{(0)}(\theta)$ to the population loss $L^{(0)}(\theta)$, we have $L^{(0)}(\hat\theta^*) \to_p L^{(0)}(\theta^*)$. By definition of $I$ and $\theta^*$, we have $L_\infty^* = L_\infty(\theta^*) = L^{(0)}(\theta^*)$. As $\hat\theta^*$ is in $I$, we also have $L_\infty(\hat\theta^*) = L^{(0)}(\hat\theta^*)$. Since, from above, $L^{(0)}(\hat\theta^*) \to_p L^{(0)}(\theta^*)$, this also implies $L_\infty(\hat\theta^*) \to_p L_\infty^*$. Using the previously established result that $\hat\theta^{\mathrm{core}} = \hat\theta^*$ with probability at least $p_n$ and $p_n \to 1$ for $n \to \infty$, this completes the proof.
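The first part of the argument can also be checked with a small simulation. The following NumPy sketch is purely our own illustration (none of the constants, seeds or variable names come from the paper): it generates data $x = x^0 + W x^{\mathrm{style}}$ and compares a parameter vector that is orthogonal to the column space of $W$ with one that is not, under increasingly large style shifts $x(\Delta) = x + W\Delta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 10, 2

# Style directions W (full column rank) and images x = x0 + W x_style,
# mirroring the decomposition used in the proof.
W = rng.normal(size=(p, q))
y = rng.choice([-1.0, 1.0], size=n)
x0 = y[:, None] * rng.normal(loc=1.0, scale=0.5, size=(n, p))   # core part
x_style = rng.normal(size=(n, q))
x = x0 + x_style @ W.T

def logistic_loss(theta, x, y):
    # numerically stable log(1 + exp(-y * x^T theta)), averaged over samples
    return np.mean(np.logaddexp(0.0, -y * (x @ theta)))

theta_any = rng.normal(size=p)
proj_W = W @ np.linalg.pinv(W)               # projector onto the column space of W
theta_inv = theta_any - proj_W @ theta_any   # satisfies W^t theta = 0
theta_pool = theta_inv + 0.5 * W[:, 0]       # has a component in col(W)

for xi in (0.0, 10.0, 100.0):
    delta = xi * np.ones(q)                  # style intervention Delta
    x_shifted = x + delta @ W.T              # x(Delta) = x + W Delta
    print(xi,
          logistic_loss(theta_inv, x_shifted, y),   # unaffected by the shift
          logistic_loss(theta_pool, x_shifted, y))  # grows without bound in xi
```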
# Appendix B. Proof of Theorem 2

Let $F_0$ be the training distribution of $(\mathrm{ID}, Y, X^{\mathrm{style}})$ and $F$ a distribution for $(\mathrm{ID}, Y, \tilde{X}^{\mathrm{style}})$ in $F_\xi$. By definition of $F_\xi$, we can write $\tilde{X}^{\mathrm{style}} = X^{\mathrm{style}} + \Delta$ for a suitable random variable $\Delta \in \mathbb{R}^q$ with
$$\Delta \in U_\xi, \quad \text{where } U_\xi = \{\Delta : E(E(\Delta^t \Sigma^{-1}_{Y,\mathrm{ID}} \Delta \mid Y, \mathrm{ID})) \le \xi\}.$$
Vice versa: if we can write $\tilde{X}^{\mathrm{style}} = X^{\mathrm{style}} + \Delta$ with $\Delta \in U_\xi$, then the distribution is in $F_\xi$. While $X$ under $F_0$ can be written as $X(\Delta = 0)$, the distribution of $X$ under $F$ is of
the form $X(\Delta)$ or, alternatively, $X(\sqrt{\xi}\, U)$ with $U \in U_1$. Adopting from now on the latter constraint that $U \in U_1$, and using (B2),
$$E_F\big[\ell(Y, f_\theta(X))\big] = E_{F_0}\big[h_\theta(0)\big] + \sqrt{\xi}\, E_{F_0}\big[(\nabla h_\theta)^t U\big] + O(\xi),$$
where $\nabla h_\theta$ is the gradient of $h_\theta(\delta)$ with respect to $\delta$, evaluated at $\delta \equiv 0$. Hence
$$\sup_{F \in F_\xi} E_F\big[h_\theta(\Delta)\big] = E_{F_0}\big[h_\theta(0)\big] + \sqrt{\xi}\, \sup_{U \in U_1} E_{F_0}\big[(\nabla h_\theta)^t U\big] + O(\xi).$$
The proof is complete if we can show that
$$C_{\theta,1/2} = \sup_{U \in U_1} E_{F_0}\big[(\nabla h_\theta)^t U\big] + O(\xi).$$
On the one hand,
$$\sup_{U \in U_1} E_{F_0}\big[(\nabla h_\theta)^t U\big] = E_{F_0}\Big[\sqrt{(\nabla h_\theta)^t\, \Sigma_{Y,\mathrm{ID}}\, (\nabla h_\theta)}\Big].$$
This follows since, for a matrix $\Sigma$ with Cholesky decomposition $\Sigma = C^t C$,
$$\max_{u:\, u^t\Sigma^{-1}u \le 1} (\nabla h)^t u = \max_{w:\, \|w\|_2 \le 1} (\nabla h)^t C^t w = \|C(\nabla h)\|_2 = \sqrt{(\nabla h)^t\, \Sigma\, (\nabla h)}.$$
On the other hand, the conditional-variance-of-loss can be expanded as
$$C_{\theta,1/2} = E_{F_0}\Big[\sqrt{\mathrm{Var}\big(\ell(Y, f_\theta(X)) \mid Y, \mathrm{ID}\big)}\Big] = E_{F_0}\Big[\sqrt{(\nabla h_\theta)^t\, \Sigma_{Y,\mathrm{ID}}\, (\nabla h_\theta)}\Big] + O(\xi),$$
which completes the proof.
# Appendix C. Network architectures
We implemented the considered models in TensorFlow (Abadi et al., 2015). The model architectures used are detailed in Table C.1. CoRe and the pooled estimator use the same network architecture and training procedure; merely the loss function differs by the CoRe regularization term. In all experiments we use the Adam optimizer (Kingma and Ba, 2015). All experimental results are based on training the respective model five times (using the same data) to assess the variance due to the randomness in the training procedure. In each epoch of the training, the training data $x_i$, $i = 1, \ldots, n$ are randomly shuffled, keeping the grouped observations $(x_i)_{i \in I_j}$ for $j \in \{1, \ldots, m\}$ together to ensure that mini batches will contain grouped observations. In all experiments the mini batch size is set to 120. For small c this implies that not all mini batches contain grouped observations, making the optimization more challenging.
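Before turning to the architecture details in Table C.1, the following TensorFlow sketch illustrates how such a per-mini-batch objective could be assembled from group labels. It is our own minimal illustration, not code from the paper: the choice of penalizing the variance of the predicted probability (rather than of a logit or of the loss), the function and argument names, and the handling of singleton groups are all our assumptions.

```python
import tensorflow as tf

def core_objective(logits, labels, group_ids, penalty_weight):
    """Pooled cross-entropy plus a conditional-variance (CoRe-style) penalty.

    group_ids: one integer per observation; observations sharing a (Y, ID)
    combination carry the same id, all other observations carry a unique id
    of their own (their within-group variance is then zero, so the exact
    estimator in the paper may normalize differently).
    """
    pooled = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

    # Within-group variance of the predicted probability, averaged over groups.
    preds = tf.nn.softmax(logits)[:, 1]
    unique_ids, idx = tf.unique(group_ids)
    num_groups = tf.size(unique_ids)
    group_mean = tf.math.unsorted_segment_mean(preds, idx, num_groups)
    group_var = tf.math.unsorted_segment_mean(
        tf.square(preds - tf.gather(group_mean, idx)), idx, num_groups)
    penalty = tf.reduce_mean(group_var)

    return pooled + penalty_weight * penalty
```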
Dataset | Optimizer | Architecture
MNIST | Adam | Input 28 × 28 × 1; CNN: Conv 5 × 5 × 16, 5 × 5 × 32 (same padding, strides = 2, ReLU activation), fully connected, softmax layer
Stickmen | Adam | Input 64 × 64 × 1; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
CelebA (all experiments using CelebA) | Adam | Input 64 × 48 × 3; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
AwA2 | Adam | Input 32 × 32 × 3; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
Table C.1: Details of the model architectures used.
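For concreteness, a minimal Keras sketch of the CelebA row of Table C.1 (as reconstructed above) could look as follows. This is an illustrative reconstruction only; the function name, the number of output classes and the exact head are our own assumptions, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def celeba_model(num_classes=2):
    """Four 5x5 conv layers with 16/32/64/128 filters, same padding,
    strides 2 and leaky ReLU, followed by a fully connected softmax layer."""
    model = tf.keras.Sequential()
    model.add(layers.InputLayer(input_shape=(64, 48, 3)))
    for filters in (16, 32, 64, 128):
        model.add(layers.Conv2D(filters, kernel_size=5, strides=2, padding="same"))
        model.add(layers.LeakyReLU())
    model.add(layers.Flatten())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```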
# Appendix D. Additional experiments
# D.1 Eyeglasses detection with small sample size
Figure D.1 shows the numerator and the denominator of the variance ratio defined in Eq. (14) separately as a function of the CoRe penalty weight. In conjunction with Figure 6(b), we observe that a ridge penalty decreases both the within- and between-group variance while the CoRe penalty penalizes the within-group variance selectively.
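Both quantities can be estimated directly from test predictions once group labels are available. The sketch below is our own illustration (the estimator behind Eq. (14) may use different normalizations): it computes the within-group variance, which the CoRe penalty targets, and the between-group variance of the predictions.

```python
import numpy as np

def within_between_variance(preds, group_ids):
    """Decompose prediction variability into within- and between-group parts.

    preds: 1-D array of predicted probabilities; group_ids: same length,
    one id per (Y, ID) group.
    """
    members = [preds[group_ids == gid] for gid in np.unique(group_ids)]
    within = np.mean([m.var() for m in members if m.size > 1])   # targeted by CoRe
    between = np.var([m.mean() for m in members])
    return within, between
```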
# D.2 Eyeglasses detection: known and unknown brightness interventions
Here, we show additional results for the experiment discussed in §5.7. Recall that we work with the CelebA dataset and consider the problem of classifying whether the person in the image is wearing eyeglasses. We discuss two alternatives for constructing diï¬erent test sets and we vary the number of grouped observations in c â {200, 2000, 5000} as well as the strength of the brightness interventions in β â {5, 10, 20}, all with sample size n = 20000. Generation of training and test sets 1 and 2 were already described in §5.7. Here, we consider additionally test set 3 where all images are left unchanged (no brightness interventions at all) and in test set 4 the brightness of all images is increased. | 1710.11469#131 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
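As an illustration of what such a brightness intervention might look like in code, the sketch below applies an additive shift to an 8-bit image. The precise mechanism is specified in §5.7, so the additive form, the clipping, the noise scale and the parameter name `beta` are all our own assumptions.

```python
import numpy as np

def brighten(image, beta, rng=None):
    """Additively shift the brightness of an 8-bit image array by roughly beta."""
    rng = np.random.default_rng() if rng is None else rng
    shift = rng.normal(loc=beta, scale=1.0)   # sampled per-image intervention strength
    out = image.astype(np.float64) + shift
    return np.clip(out, 0, 255).astype(np.uint8)
```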
In §5.7 we used images of the same person to create a grouped observation by sampling a different value for the brightness intervention. We refer to this as "Grouping setting 2" here. An alternative is to use the same image of the same person in different brightnesses (drawn from the same distribution) as a group over which the conditional variance is calculated. We call this "Grouping setting 1" and it can be useful if we know that we want to protect against brightness interventions in the future. For comparison, we also evaluate grouping with an image of a different person (but sharing the same class label) as a baseline ("Grouping setting 3").
Figure D.1: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. Panel (a) shows the numerator of the variance ratio defined in Eq. (14) on test data as a function of both the CoRe and ridge penalty weights. Panel (b) shows the equivalent plot for the denominator. A ridge penalty decreases both the within- and between-group variance while the CoRe penalty penalizes the within-group variance selectively (the latter can be seen more clearly in Figure 6(b)).
(a) Misclassified examples from the test sets. (b) Misclassification rates for β = 20 and c = 2000. Results for different test sets, grouping settings, β ∈ {5, 10, 20} and c ∈ {200, 5000} can be found in Figure D.4.
[Figure D.3 panels: (a) grouping setting 1, β = 5; (b) grouping setting 1, β = 10; (c) grouping setting 1, β = 20; (d) grouping setting 2, β = 5; (e) grouping setting 2, β = 10; (f) grouping setting 2, β = 20; (g) grouping setting 3, β = 5; (h) grouping setting 3, β = 10; (i) grouping setting 3, β = 20]
Figure D.3: Examples from the CelebA eyeglasses detection with brightness interventions, grouping settings 1–3 with β ∈ {5, 10, 20}. In all rows, the first three images from the left have y ≡ no glasses; the remaining three images have y ≡ glasses. Connected images are grouped examples. In panels (a)–(c), row 1 shows examples from the training set, rows 2–4 contain examples from test sets 2–4, respectively. Panels (d)–(i) show examples from the respective training sets.
Examples from the training sets using grouping settings 1, 2 and 3 can be found in Figure D.3.
Results for all grouping settings, β ∈ {5, 10, 20} and c ∈ {200, 5000} can be found in Figure D.4. We see that using grouping setting 1 works best since we could explicitly control that only X^style ≡ brightness varies between grouping examples. In grouping setting 2, different images of the same person can vary in many factors, making it more challenging to isolate brightness as the factor to be invariant against. Lastly, we see that if we group images of different persons ("Grouping setting 3"), the difference between the CoRe estimator and the pooled estimator becomes much smaller than in the previous settings.
Regarding the results for grouping setting 1 in Figure D.2, we notice that the pooled estimator performs better than CoRe on test set 1. This can be explained by the fact that it can exploit the predictive information contained in the brightness of an image while CoRe is restricted not to do so. Second, we observe that the pooled estimator does not perform well on test sets 2 and 4 as its learned representation seems to use the image's brightness as a predictor for the response, which fails when the brightness distribution in the test set differs significantly from the training set. In contrast, the predictive performance of CoRe is hardly affected by the changing brightness distributions.
[Figure D.4 panels: (a) grouping setting 1, c = 200; (b) grouping setting 1, c = 2000. Bar plots of misclassification rates for datasets Tr and Te1–Te4, with sub-panels for mean β ∈ {5, 10, 20}, comparing the CoRe and pooled estimators.]
[Figure D.4 panels: (c) grouping setting 2, c = 2000; (d) grouping setting 2, c = 5000; (e) grouping setting 3, c = 2000; (f) grouping setting 3, c = 5000]
Figure D.4: Misclassification rates for the CelebA eyeglasses detection with brightness interventions, grouping settings 1–3 with c ∈ {200, 2000, 5000} and the mean of the exponential distribution β ∈ {5, 10, 20}.
# D.3 Gender classification
Table D.2 additionally reports the standard errors for the results discussed in §5.2.
Table D.2: Classification for Y ∈ {woman, man}. We compare six different training datasets that vary with respect to the distribution of Y in the grouped observations: the proportion of images showing men among the grouped observations is varied between κ = 0.5 and κ = 1. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. Both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. [Table entries: training and test errors (train, test 1, test 2) and penalty values (train, test: females, test: males) for κ ∈ {0.5, 0.75, 0.9, 0.95, 0.99, 1}, for the 5-layer CNN and for the 5-layer CNN + CoRe, with standard errors in parentheses.]
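A small sketch of how the six settings can be constructed is given below. The helper and its arguments are illustrative assumptions; only the number of grouped observations (500) and the values of κ come from the table.

```python
import numpy as np

def sample_group_ids(group_labels, n_groups=500, kappa=0.75, seed=0):
    """Pick group identifiers so that a fraction kappa of them has label 'man'.

    group_labels: dict mapping a group identifier to its label in {'man', 'woman'}.
    """
    rng = np.random.default_rng(seed)
    men = [g for g, y in group_labels.items() if y == 'man']
    women = [g for g, y in group_labels.items() if y == 'woman']
    n_men = int(round(kappa * n_groups))
    men_pick = rng.choice(len(men), size=n_men, replace=False)
    women_pick = rng.choice(len(women), size=n_groups - n_men, replace=False)
    chosen = [men[i] for i in men_pick] + [women[i] for i in women_pick]
    rng.shuffle(chosen)
    return chosen

# The six training datasets of Table D.2 then correspond to
# kappa in {0.5, 0.75, 0.9, 0.95, 0.99, 1.0}.
```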
Figure D.5: Data augmentation setting: Misclassiï¬cation rates for MNIST and X style â¡ rotation. In test set 1 all digits are rotated by a degree randomly sampled from [35, 70]. Test set 2 is the usual MNIST test set.
place of observation D person ID adult/child Y â height X core movement X style(â) image X(â) fθ ËY (X(â))
Figure D.6: Data generating process for the stickmen example.
# D.4 MNIST: more sample eï¬cient data augmentation | 1710.11469#145 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.