Dataset schema (one record per chunk):
doi: string (length 10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8)
updated: string (length 8)
references: list
1610.02413
33
Eℓ(Ŷ, Y) ≤ Eℓ(Ŷ*, Y) + 2√2 · d_K(R, R*), where R* is the Bayes optimal regressor. The same claim is true for equal opportunity. Proof of Theorem 5.6. We prove the claim for equalized odds. The case of equal opportunity is analogous. Fix the loss function ℓ and the regressor R. Take Ŷ* to be the predictor derived from the Bayes optimal regressor R* and A. By Corollary 5.3, we know that this is an optimal equalized odds predictor as required by the lemma. It remains to construct a derived equalized odds predictor Ŷ and relate its loss to that of Ŷ*. Recall the optimization problem for defining the optimal derived equalized odds predictor. Let D_a be the constraint region defined by R. Likewise, let D_a* be the constraint region under R*. The optimal classifier Ŷ* corresponds to a point p* ∈ D_0* ∩ D_1*. As a consequence of Lemma 5.5, we can find (not necessarily identical) points q_0 ∈ D_0 and q_1 ∈ D_1 such that for all a ∈ {0, 1}, ‖p* − q_a‖_2 ≤ √2 · d_K(R, R*). We claim that this means we can also find a feasible point q ∈ D_0 ∩ D_1 such that
1610.02413#33
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
34
# 5 DISCUSSION AND FUTURE WORK The abnormality module demonstrates that in some cases the baseline can be beaten by exploiting the representations of a network, suggesting myriad research directions. Some promising future avenues may utilize the intra-class variance: if the distance from an example to another of the same predicted class is abnormally high, it may be out-of-distribution (Giryes et al., 2015). Another path is to feed in a vector summarizing a layer’s activations into an RNN, one vector for each layer. The RNN may determine that the activation patterns are abnormal for out-of-distribution examples. Others could make the detections fine-grained: is the out-of-distribution example a known-unknown or an unknown-unknown? A different avenue is not just to detect correct classifications but to output the probability of a correct detection. These are but a few ideas for improving error and out-of-distribution detection.
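One way to make the intra-class variance idea above concrete is to score a test input by its distance, in some representation space, to the nearest training example of its predicted class. The sketch below is only an illustration of that research direction, not a method from the paper; the feature space, the Euclidean distance, and the function name are all our own choices.

```python
import numpy as np

def intra_class_distance_score(test_feats, test_preds, train_feats, train_labels):
    """Abnormality score: distance to the nearest training example of the same
    predicted class. Larger values hint at an out-of-distribution input.
    Features are assumed to come from some hidden layer of the network."""
    scores = np.empty(len(test_feats))
    for i, (x, c) in enumerate(zip(test_feats, test_preds)):
        same_class = train_feats[train_labels == c]   # training points of class c
        scores[i] = np.linalg.norm(same_class - x, axis=1).min()
    return scores
```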
1610.02136#34
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
34
‖p* − q_a‖_2 ≤ √2 · d_K(R, R*). We claim that this means we can also find a feasible point q ∈ D_0 ∩ D_1 such that ‖p* − q‖_2 ≤ 2 · d_K(R, R*). To see this, assume without loss of generality that the first coordinate of q_1 is greater than the first coordinate of q_0, and that all points p*, q_0, q_1 lie above the main diagonal. By definition of D_1, we know that the entire line segment L_1 from (0, 0) to q_1 is contained in D_1. Similarly, the entire line segment L_0 between q_0 and (1, 1) is contained in D_0. Now, take q ∈ L_0 ∩ L_1. By construction, q ∈ D_0 ∩ D_1 defines a classifier Ŷ derived from R and A. Moreover, ‖p* − q‖_2^2 ≤ ‖p* − q_0‖_2^2 + ‖p* − q_1‖_2^2 ≤ 4 · d_K(R, R*)^2. Finally, by assumption on the loss function, there is a vector v with ‖v‖_2 ≤ √2 such that Eℓ(Ŷ, Y) = ⟨v, q⟩ and Eℓ(Ŷ*, Y) = ⟨v, p*⟩. Applying Cauchy–Schwarz,
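The geometric step (intersecting the two segments to obtain a point feasible for both groups) is easy to check numerically. The sketch below is ours, with made-up points above the diagonal; it only illustrates the construction for one configuration, not a proof.

```python
import numpy as np

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def segment_intersection(p1, p2, p3, p4):
    """Intersection point of the line through p1, p2 with the line through p3, p4
    (the segments are assumed to cross, as in the proof)."""
    d1, d2 = p2 - p1, p4 - p3
    t = cross2(p3 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

p_star = np.array([0.3, 0.6])        # point chosen for the Bayes optimal score
q0 = np.array([0.25, 0.55])          # nearby feasible point for group 0
q1 = np.array([0.35, 0.65])          # nearby feasible point for group 1
q = segment_intersection(np.zeros(2), q1, q0, np.ones(2))

d = max(np.linalg.norm(p_star - q0), np.linalg.norm(p_star - q1)) / np.sqrt(2)
assert np.linalg.norm(p_star - q) <= 2 * d + 1e-9    # the claimed bound holds here
```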
1610.02413#34
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
35
We hope that any new detection methods are tested on a variety of tasks and architectures of the researcher’s choice. A basic demonstration could include the following datasets: MNIST, CIFAR, IMDB, and tweets because vision-only demonstrations may not transfer well to other architectures and datasets. Reporting the AUPR and AUROC values is important, and so is the underlying classifier’s accuracy since an always-wrong classifier gets a maximum AUPR for error detection if error is the positive class. Also, future research need not use the exact values from this paper for comparisons. Machine learning systems evolve, so tethering the evaluations to the exact architectures and datasets in this paper is needless. Instead, one could simply choose a variety of datasets and architectures possibly like those above and compare their detection method with a detector based on the softmax prediction probabilities from their classifiers. These are our basic recommendations for others who try to surpass the baseline on this underexplored challenge. # 6 CONCLUSION
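The evaluation recipe recommended here (score every example with its maximum softmax probability and report AUROC and AUPR for the detection task) fits in a few lines. The sketch below is ours, with placeholder data, and assumes scikit-learn's roc_auc_score and average_precision_score as the metric implementations.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def detection_scores(softmax_probs, is_positive):
    """Max-softmax baseline: softmax_probs is (n, k); is_positive is 1 for the
    detection-positive class (misclassified or out-of-distribution examples)."""
    # A low maximum softmax probability should indicate an error / OOD input,
    # so negate it to obtain an abnormality score.
    abnormality = -softmax_probs.max(axis=1)
    return {"AUROC": roc_auc_score(is_positive, abnormality),
            "AUPR": average_precision_score(is_positive, abnormality)}

# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 2, size=1000)   # pretend 1 = out-of-distribution
print(detection_scores(probs, labels))
```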
1610.02136#35
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
35
⟨v, p*⟩. Applying Cauchy–Schwarz, Eℓ(Ŷ, Y) − Eℓ(Ŷ*, Y) = ⟨v, q − p*⟩ ≤ ‖v‖_2 · ‖q − p*‖_2 ≤ 2√2 · d_K(R, R*). This completes the proof. # 6 Oblivious identifiability of discrimination Before turning to analyzing data, we pause to consider to what extent “black box” oblivious tests like ours can identify discriminatory predictions. To shed light on this issue, we introduce two possible scenarios for the dependency structure of the score, the target and the protected attribute. We will argue that while these two scenarios can have fundamentally different interpretations from the point of view of fairness, they can be indistinguishable from their joint distribution. In particular, no oblivious test can resolve which of the two scenarios applies. Scenario I. Consider the dependency structure depicted in Figure 4. Here, X1 is a feature highly (even deterministically) correlated with the protected attribute A, but independent of the target Y given A. For example, X1 might be “languages spoken at home” or “great great grandfather’s profession”. The target Y has a statistical correlation with the protected attribute. There is a second real-valued feature X2 correlated with Y, but only related to A through Y. For example, X2
1610.02413#35
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
36
# 6 CONCLUSION We demonstrated a softmax prediction probability baseline for error and out-of-distribution detection across several architectures and numerous datasets. We then presented the abnormality module, which provided superior scores for discriminating between normal and abnormal examples on tested cases. The abnormality module demonstrates that the baseline can be beaten in some cases, and this implies there is room for future research. Our hope is that other researchers investigate architectures which make predictions in view of abnormality estimates, and that others pursue more reliable methods for detecting errors and out-of-distribution inputs because knowing when a machine learning system fails strikes us as highly important. # ACKNOWLEDGMENTS We would like to thank John Wieting, Hao Tang, Karen Livescu, Greg Shakhnarovich, and our reviewers for their suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research. # REFERENCES Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv, 2016. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. English Web Treebank, 2012. Yaroslav Bulatov. notMNIST dataset. 2011.
1610.02136#36
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
36
Figure 4: Graphical model for Scenario I. X2 might capture an applicant’s driving record if applying for insurance, financial activity if applying for a loan, or criminal history in criminal justice situations. An intuitively “fair” predictor here is to use only the feature X2 through the score R = X2. The score R satisfies equalized odds, since X2 and A are independent conditional on Y. Because of the statistical correlation between A and Y, a better statistical predictor, with greater power, can be obtained by taking into account also the protected attribute A, or perhaps its surrogate X1. The statistically optimal predictor would have the form R* = η_I(X2, X1), biasing the score according to the protected attribute A. The score R* does not satisfy equalized odds, and in a sense seems to be “profiling” based on A.
1610.02413#36
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
37
Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. English Web Treebank, 2012. Yaroslav Bulatov. notMNIST dataset. 2011. Jesse Davis and Mark Goadrich. The relationship between precision-recall and ROC curves. In International Conference on Machine Learning (ICML), 2006. Tom Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 2005. John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallett, Nancy Dahlgren, and Victor Zue. TIMIT Acoustic-Phonetic Continuous Speech Corpus. Linguistic Data Consortium, 1993. Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments. Association for Computational Linguistics (ACL), 2011. Raja Giryes, Guillermo Sapiro, and Alex M. Bronstein. Deep neural networks with random gaussian weights: A universal classification strategy? arXiv, 2015.
1610.02136#37
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
37
Scenario II. Now consider the dependency structure depicted in Figure 5. Here X3 is a feature, e.g. “wealth” or “annual income”, correlated with the protected attribute A and directly predictive of the target Y. That is, in this model, the probability of paying back a loan is just a function of an individual’s wealth, independent of their race. Using X3 on its own as a predictor, e.g. using the score R* = X3, does not naturally seem directly discriminatory. However, as can be seen from the dependency structure, this score does not satisfy equalized odds. We can correct it to satisfy equalized odds and consider the optimal non-discriminating predictor R = η_II(X3, A) that does satisfy equalized odds. If A and X3, and thus A and Y, are positively correlated, then R would depend inversely on A (see numerical construction below), introducing a form of “corrective discrimination”, so as to make R independent of A given Y (as is required by equalized odds). Figure 5: Graphical model for Scenario II. # 6.1 Unidentifiability
1610.02413#37
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
38
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: Labeling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning (ICML), 2006. Dan Hendrycks and Kevin Gimpel. Methods for detecting adversarial images and a colorful saliency map. arXiv, 2016a. Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv, 2016b. Dan Hendrycks and Kevin Gimpel. Adjusting for dropout variance in batch normalization and weight initialization. arXiv, 2016c. Hans-Günter Hirsch and David Pearce. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. ISCA ITRW ASR2000, 2000. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997. Minqing Hu and Bing Liu. Mining and Summarizing Customer Reviews. Knowledge Discovery and Data Mining (KDD), 2004.
1610.02136#38
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
38
# 6.1 Unidentifiability The above two scenarios seem rather different. The optimal score R* is in one case based directly on A or its surrogate, and in another only on a directly predictive feature, but this is not apparent by considering the equalized odds criterion, suggesting a possible shortcoming of equalized odds. In fact, as we will now see, the two scenarios are indistinguishable using any oblivious test. That is, no test based only on the target labels, the protected attribute and the score would give different indications for the optimal score R* in the two scenarios. If it were judged unfair in one scenario, it would also be judged unfair in the other. We will show this by constructing specific instantiations of the two scenarios where the joint distributions over (Y, A, R*, R) are identical. The scenarios are thus unidentifiable based only on these joint distributions. We will consider binary targets and protected attributes taking values in A, Y ∈ {−1, 1} and real-valued features. We deviate from our convention of {0, 1}-values only to simplify the resulting expressions. In Scenario I, let: • Pr {A = 1} = 1/2, and X1 = A. • Y follows a logistic model parametrized based on A: Pr {Y = y | A = a} =
1610.02413#38
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
39
Minqing Hu and Bing Liu. Mining and Summarizing Customer Reviews. Knowledge Discovery and Data Mining (KDD), 2004. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep Unordered Composition Rivals Syntactic Methods for Text Classification. Association for Computational Linguistics (ACL), 2015. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv, 2016. Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR), 2015. Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009. Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015. Ken Lang. Newsweeder: Learning to filter netnews. In International Conference on Machine Learning (ICML), 1995. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (JMLR), 2004.
1610.02136#39
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
39
1/(1 + exp(−2ay)). • X2 is Gaussian with mean Y: X2 = Y + N(0, 1). • Optimal unconstrained and equalized odds scores are given by R* = X1 + X2 = A + X2, and R = X2. Figure 6: Two possible directed dependency structures for the variables in scenarios I and II. The undirected versions of both graphs are also possible. In Scenario II, let: • Pr {A = 1} = 1/2. • X3 conditional on A = a is a mixture of two Gaussians: N(a + 1, 1) with weight 1/(1 + exp(−2a)) and N(a − 1, 1) with weight 1/(1 + exp(2a)). • Y follows a logistic model parametrized based on X3: Pr {Y = y | X3 = x3} = 1/(1 + exp(−2yx3)). • Optimal unconstrained and equalized odds scores are given by R* = X3, and R = X3 − A. The following proposition establishes the equivalence between the scenarios and the optimality of the scores (proof at end of section):
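The two constructions are straightforward to simulate. The sketch below is ours, not code from the paper: it samples both scenarios under the stated parametrization and prints a few summary statistics of (Y, A, R*, R), which should agree across the two scenarios up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def scenario_I(n):
    a = rng.choice([-1, 1], size=n)                  # A uniform on {-1, 1}, X1 = A
    y = np.where(rng.random(n) < 1 / (1 + np.exp(-2 * a)), 1, -1)   # logistic in A
    x2 = y + rng.normal(size=n)                      # X2 = Y + N(0, 1)
    return y, a, a + x2, x2                          # (Y, A, R* = X1 + X2, R = X2)

def scenario_II(n):
    a = rng.choice([-1, 1], size=n)
    w = 1 / (1 + np.exp(-2 * a))                     # weight of the N(a + 1, 1) component
    mean = np.where(rng.random(n) < w, a + 1, a - 1)
    x3 = mean + rng.normal(size=n)                   # mixture of two Gaussians given A
    y = np.where(rng.random(n) < 1 / (1 + np.exp(-2 * x3)), 1, -1)  # logistic in X3
    return y, a, x3, x3 - a                          # (Y, A, R* = X3, R = X3 - A)

for name, (y, a, rstar, r) in [("I", scenario_I(n)), ("II", scenario_II(n))]:
    print(name, float(np.mean(y == 1)),
          float(np.corrcoef(rstar, y)[0, 1]), float(np.corrcoef(r, a)[0, 1]))
```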
1610.02413#39
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
40
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv, 2016. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Association for Computational Linguistics (ACL), 2011. Chris Manning and Hinrich Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993. Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Computer Vision and Pattern Recognition (CVPR), 2015. Khanh Nguyen and Brendan O’Connor. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Language Processing (EMNLP), 2015.
1610.02136#40
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
40
The following proposition establishes the equivalence between the scenarios and the optimality of the scores (proof at end of section): Proposition 6.1. The joint distributions of (Y, A, R*, R) are identical in the above two scenarios. Moreover, R* and R are optimal unconstrained and equalized odds scores respectively, in that their ROC curves are optimal and for any loss function an optimal (unconstrained or equalized odds) classifier can be derived from them by thresholding. Not only can an oblivious test (based only on (Y, A, R)) not distinguish between the two scenarios, but even having access to the features is not of much help. Suppose we have access to all three features, i.e. to a joint distribution over (Y, A, X1, X2, X3)—since the distributions over (Y, A, R*, R) agree, we can construct such a joint distribution with X2 = R and X3 = R*. The features are correlated with each other, with X3 = X1 + X2. Without attaching meaning to the features or making causal assumptions about them, we do not gain any further insight on the two scores. In particular, both causal structures depicted in Figure 6 are possible. 6.2 Comparison of different oblivious measures It is interesting to consider how different oblivious measures apply to the scores R and R* in these two scenarios.
1610.02413#40
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
41
Khanh Nguyen and Brendan O’Connor. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Language Processing (EMNLP), 2015. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. Improved part-of-speech tagging for online conversational text with word clusters. In North American Chapter of the Association for Computational Linguistics (NAACL), 2013. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Empirical Methods in Natural Language Processing (EMNLP), 2002. Foster Provost, Tom Fawcett, and Ron Kohavi. The case against accuracy estimation for comparing induction algorithms. In International Conference on Machine Learning (ICML), 1998. Takaya Saito and Marc Rehmsmeier. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. In PLoS ONE, 2015. Michael L. Seltzer, Dong Yu, and Yongqiang Wang. Investigation of deep neural networks for noise robust speech recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.
1610.02136#41
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
41
6.2 Comparison of different oblivious measures It is interesting to consider how different oblivious measures apply to the scores R and R* in these two scenarios. As discussed in Section 4.2, a score satisfies equalized odds iff the conditional ROC curves agree for both values of A, which we refer to as having identical ROC curves. Definition 6.2 (Identical ROC Curves). We say that a score R has identical conditional ROC curves if C_a(t) = C_a′(t) for all groups a, a′ and all t ∈ ℝ. In particular, this property is achieved by an equalized odds score R. Within each protected group, i.e. for each value A = a, the score R* differs from R by a fixed monotone transformation, namely an additive shift R* = R + A. Consider a derived threshold predictor Ŷ(R) = 1{R > t} based on R. Any such predictor obeys equalized odds. We can also derive the same predictor
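A conditional ROC curve is just an ordinary ROC curve computed within one protected group, so Definition 6.2 can be checked empirically. The helper below is a sketch of such a check; the commented usage assumes samples (y, a, rstar, r) like those produced by the simulation sketch given earlier.

```python
import numpy as np

def conditional_roc(score, y, a, group, thresholds):
    """A-conditional ROC curve C_group: true and false positive rates of the
    threshold classifiers 1{R > t}, computed within the group A == group."""
    mask = (a == group)
    s, pos = score[mask], (y[mask] == 1)
    tpr = np.array([(s[pos] > t).mean() for t in thresholds])
    fpr = np.array([(s[~pos] > t).mean() for t in thresholds])
    return fpr, tpr

# With Scenario I samples, the curves of R for groups -1 and +1 coincide pointwise
# (identical conditional ROC curves), while the curves of R* trace out the same
# image but reach a given point at thresholds shifted by the group value:
# ts = np.linspace(-4, 4, 81)
# print(conditional_roc(r, y, a, -1, ts)[1][:5], conditional_roc(r, y, a, +1, ts)[1][:5])
```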
1610.02413#41
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
42
Jacob Steinhardt and Percy Liang. Unsupervised risk estimation using only conditional independence structure. In Neural Information Processing Systems (NIPS), 2016. Dong Wang and Xuewei Zhang. THCHS-30: A free Chinese speech corpus. In Technical Report, 2015. Gethin Williams and Steve Renals. Confidence measures for hybrid HMM/ANN speech recognition. In Proceedings of EuroSpeech, 1997. Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010. Dong Yu, Jinyu Li, and Li Deng. Calibration of confidence measures in speech recognition. In IEEE Transactions on Audio, Speech, and Language, 2010. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference, 2016. Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In International Conference on Machine Learning (ICML), 2016. # A ABNORMALITY MODULE EXAMPLE
1610.02136#42
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
42
deterministically from R* and A as Ŷ(R*, A) = 1{R* > t_A} where t_A = t + A. That is, in our particular example, R* is special in that optimal equalized odds predictors can be derived from it (and the protected attribute A) deterministically, without the need to introduce randomness as in Section 4.2. In terms of the A-conditional ROC curves, this happens because the images of the conditional ROC curves C_{−1} and C_1 overlap, making it possible to choose points in the true/false-positive rate plane that are on both ROC curves. However, the same point on the conditional ROC curves corresponds to different thresholds! Instead of C_a(t) = C_a′(t), for R* we have C_a*(t) = C_a(t − a). We refer to this property as “matching” conditional ROC curves: Definition 6.3 (Matching ROC curves). We say that a score R has matching conditional ROC curves if the images of all A-conditional ROC curves are the same, i.e., for all groups a, a′, {C_a(t) : t ∈ ℝ} = {C_a′(t) : t ∈ ℝ}.
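Because R* = R + A in this construction, the deterministic correction is just a per-group threshold shift, and the identity 1{R* > t + A} = 1{R > t} can be verified mechanically. A tiny sketch of ours with made-up numbers:

```python
import numpy as np

def derived_predictor(rstar, a, t):
    """Deterministic equalized odds predictor derived from R* and A:
    Yhat = 1{R* > t_A} with the group-specific threshold t_A = t + A."""
    return (rstar > (t + a)).astype(int)

a = np.array([-1, -1, 1, 1])
r = np.array([0.2, 1.5, 0.2, 1.5])     # the equalized odds score R
t = 1.0
# Since R* = R + A, thresholding R* at t + A reproduces thresholding R at t.
assert np.array_equal(derived_predictor(r + a, a, t), (r > t).astype(int))
```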
1610.02413#42
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02136
43
Figure 1: A neural network classifying a diamond image with an auxiliary decoder and an abnormality module. Circles are neurons, either having a GELU or sigmoid activation. The blurred diamond reconstruction precedes subtraction and elementwise squaring. The probability vector is the softmax probability vector. Blue layers train on in-distribution data, and red layers train on both in- and out-of-distribution examples.
1610.02136#43
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
http://arxiv.org/pdf/1610.02136
Dan Hendrycks, Kevin Gimpel
cs.NE, cs.CV, cs.LG
Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix. Minor changes from the previous version
International Conference on Learning Representations 2017
cs.NE
20161007
20181003
[]
1610.02413
43
Having matching conditional ROC curves corresponds to being deterministically correctable to be non-discriminating: if a predictor R has matching conditional ROC curves, then for any loss function the optimal equalized odds derived predictor is a deterministic function of R and A. But as our examples show, having matching ROC curves does not at all mean the score is itself non-discriminatory: it can be biased according to A, and a (deterministic) correction might be necessary in order to ensure equalized odds. Having identical or matching ROC curves is a property of the conditional distribution R | Y, A, also referred to as “model errors”. Oblivious measures can also depend on the conditional distribution Y | R, A, also referred to as “target population errors”. In particular, one might consider the following property: Definition 6.4 (Matching frequencies). We say that a score R has matching conditional frequencies if, for all groups a, a′ and all t, Pr{Y = 1 | R = t, A = a} = Pr{Y = 1 | R = t, A = a′}.
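Matching conditional frequencies is group-wise calibration of the score: within each score bin, every group has the same positive rate. A binned empirical estimate is sketched below; the binning and the reuse of simulated samples are our own illustrative choices.

```python
import numpy as np

def conditional_frequencies(score, y, a, bins):
    """Binned estimate of Pr{Y = 1 | R in bin, A = g} for every group g."""
    idx = np.digitize(score, bins)
    out = {}
    for g in np.unique(a):
        rows = []
        for b in range(len(bins) + 1):
            m = (a == g) & (idx == b)
            rows.append((y[m] == 1).mean() if m.any() else np.nan)
        out[g] = np.array(rows)
    return out

# With the simulated scenarios above, out[g] is (approximately) the same for both
# groups when the score is R*, but not when the score is R.
```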
1610.02413#43
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
44
Pr{Y = 1 | R = t, A = a} = Pr{Y = 1 | R = t, A = a′}. Matching conditional frequencies mean that at a given score, both groups have the same probability of being labeled positive. The definition can also be phrased as requiring that the conditional distribution Y | R, A be independent of A. In other words, having matching conditional frequencies is equivalent to A and Y being independent conditioned on R. The corresponding dependency structure is Y – R – A. That is, the score R includes all possible information the protected attribute can provide on the target Y. Indeed, having matching conditional frequencies means that the score is in a sense “optimally dependent” on the protected attribute A. Formally, for any loss function the optimal (unconstrained, possibly discriminatory) derived predictor Ŷ(R, A) would be a function of R alone, since R already includes all relevant information about A. In particular, an unconstrained optimal score, like R* in our constructions, would satisfy matching conditional frequencies. Having matching frequencies can therefore be seen as a property indicating use of the protected attribute for optimal predictive power, rather than protection from discrimination based on it.
1610.02413#44
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
45
It is also worth noting the similarity between matching frequencies and a binary predictor Ŷ having equal conditional precision, that is, Pr{Y = 1 | Ŷ = ŷ, A = a} = Pr{Y = 1 | Ŷ = ŷ, A = a′}. Viewing Ŷ as a score that takes two possible values, the notions agree. But R having matching conditional frequencies does not imply the threshold predictors Ŷ(R) = 1{R > t} will have matching precision; the conditional distributions R | A might be different, and these are involved in marginalizing over R > t and R ≤ t. To summarize, the properties of the scores in our scenarios are: • R* is optimal based on the features and protected attribute, without any constraints. • R is optimal among all equalized odds scores. • R does satisfy equalized odds, R* does not satisfy equalized odds. • R has identical (thus matching) ROC curves, R* has matching but non-identical ROC curves. • R* has matching conditional frequencies, while R does not. # Proof of Proposition 6.1
1610.02413#45
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
46
• R* has matching conditional frequencies, while R does not. # Proof of Proposition 6.1 First consider Scenario I. The score R = X2 obeys equalized odds due to the dependency structure. More broadly, if a score R = f(X2, X1) obeys equalized odds, for some randomized function f, it cannot depend on X1: conditioned on Y, X2 is independent of A = X1, and so any dependency of f on X1 would create a statistical dependency on A = X1 (still conditioned on Y), which is not allowed. We can verify that Pr{Y = y | X2 = x2} ∝ Pr{Y = y} Pr{X2 = x2 | Y = y} ∝ exp(2yx2), which is monotone in X2, and so for any loss function we would just want to threshold X2, and any function monotone in X2 would make an optimal equalized odds predictor. To obtain the optimal unconstrained score, consider Pr{Y = y | X1 = x1, X2 = x2} ∝ Pr{A = x1} Pr{Y = y | A = x1} Pr{X2 = x2 | Y = y} ∝ exp(2y(x1 + x2)). That is, optimal classification only depends on x1 + x2 and so R* = X1 + X2 is optimal.
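The proportionality above says the posterior is logistic in x1 + x2, i.e. Pr{Y = 1 | X1 = x1, X2 = x2} = 1/(1 + exp(−2(x1 + x2))). A quick simulation check of ours, conditioning on X1 + X2 near a few reference values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
a = rng.choice([-1, 1], size=n)                                  # X1 = A
y = np.where(rng.random(n) < 1 / (1 + np.exp(-2 * a)), 1, -1)    # Y | A logistic
x2 = y + rng.normal(size=n)                                      # X2 = Y + N(0, 1)
s = a + x2                                                       # X1 + X2 = R*

for r0 in [-1.0, 0.0, 0.5, 1.5]:
    window = np.abs(s - r0) < 0.05                               # condition on the sum
    empirical = float((y[window] == 1).mean())
    logistic = 1 / (1 + np.exp(-2 * r0))
    print(f"r={r0:+.1f}  empirical={empirical:.3f}  logistic={logistic:.3f}")
```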
1610.02413#46
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
48
That is, optimal classification only depends on x1 + x2 and so R* = X1 + X2 is optimal. Turning to Scenario II, since P(Y | X3) is monotone in X3, any monotone function of it is optimal (unconstrained), and the dependency structure implies it is optimal even if we allow dependence on A. Furthermore, the conditional distribution Y | X3 matches that of Y | R* from Scenario I, since again we have Pr{Y = y | X3 = x3} ∝ exp(2yx3) by construction. Since we defined R* = X3, we have that the conditionals R* | Y match. We can also verify that by construction X3 | A matches R* | A in Scenario I. Since in Scenario I, R* is optimal even dependent on A, we have that A is independent of Y conditioned on R*, as in Scenario II when we condition on X3 = R*. This establishes that the joint distribution over (A, Y, R*) is the same in both scenarios. Since R is the same deterministic function of A and R* in both scenarios, we can further conclude that the joint distributions over (A, Y, R*, R) are the same. Since equalized odds is an oblivious property, once these distributions match, if R obeys equalized odds in Scenario I, it also obeys it in Scenario II.
1610.02413#48
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
49
# 7 Case study: FICO scores We examine various fairness measures in the context of FICO scores with the protected attribute of race. FICO scores are a proprietary classifier widely used in the United States to predict creditworthiness. Our FICO data is based on a sample of 301,536 TransUnion TransRisk scores from 2003 [Res07]. These scores, ranging from 300 to 850, try to predict credit risk; they form our score R. People were labeled as in default if they failed to pay a debt for at least 90 days on at least one account in the ensuing 18-24 month period; this gives an outcome Y. Our protected attribute A is race, which is restricted to four values: Asian, white non-Hispanic (labeled “white” in figures), Hispanic, and black. FICO scores are complicated proprietary classifiers based on features, like number of bank accounts kept, that could interact with culture—and hence race—in unfair ways. A credit score cutoff of 620 is commonly used for prime-rate loans,
1610.02413#49
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
http://arxiv.org/pdf/1610.02413
Moritz Hardt, Eric Price, Nathan Srebro
cs.LG
null
null
cs.LG
20161007
20161007
[]
1610.02413
50
which corresponds to an any-account default rate of 18%. Note that this measures default on any account TransUnion was aware of; it corresponds to a much lower (≈ 2%) chance of default on individual new loans. To illustrate the concepts, we use any-account default as our target Y; a higher positive rate better illustrates the difference between equalized odds and equal opportunity.

Figure 7: Non-default rate by FICO score (left) and CDF of FICO score by group (right), for the Asian, white, Hispanic, and black groups. These two marginals, and the number of people per group, constitute our input data.
We therefore consider the behavior of a lender who makes money on default rates below this, i.e., for whom false positives (giving loans to people that default on any account) are 82/18 times as expensive as false negatives (not giving a loan to people that don't default). The lender thus wants to construct a predictor Ŷ that is optimal with respect to this asymmetric loss. A typical classifier will pick a threshold per group and set Ŷ = 1 for people with FICO scores above the threshold for their group. Given the marginal distributions for each group (Figure 7), we can study the optimal profit-maximizing classifier under five different constraints on allowed predictors:

• Max profit has no fairness constraints, and will pick for each group the threshold that maximizes profit. This is the score at which 82% of people in that group do not default.

• Race blind requires the threshold to be the same for each group. Hence it will pick the single threshold at which 82% of people do not default overall, shown in Figure 8.

• Demographic parity picks for each group a threshold such that the fraction of group members that qualify for loans is the same.

• Equal opportunity picks for each group a threshold such that the fraction of non-defaulting group members that qualify for loans is the same.
1 http://www.creditscoring.com/pages/bar.htm (Accessed: 2016-09-20)

Figure 8: Single threshold, shown on the raw score (left) and as a within-group percentile (right). The common FICO threshold of 620 corresponds to a non-default rate of 82%. Rescaling the x axis to represent the within-group thresholds (right), Pr[Ŷ = 1 | Y = 1, A] is the fraction of the area under the curve that is shaded. This means black non-defaulters are much less likely to qualify for loans than white or Asian ones, so a race blind score threshold violates our fairness definitions.
• Equalized odds requires both the fraction of non-defaulters that qualify for loans and the fraction of defaulters that qualify for loans to be constant across groups. This cannot be achieved with a single threshold for each group, but requires randomization. There are many ways to do it; here, we pick two thresholds for each group, so above both thresholds people always qualify and between the thresholds people qualify with some probability.

We could generalize the above constraints to allow non-threshold classifiers, but we can show that each profit-maximizing classifier will use thresholds. As shown in Section 4, the optimal thresholds can be computed efficiently; the results are shown in Figure 9. Our proposed fairness definitions give thresholds between those of the max-profit/race-blind thresholds and those of demographic parity.

Figure 10 plots the ROC curves for each group. It should be emphasized that differences in the ROC curve do not indicate differences in default behavior but rather differences in prediction accuracy—lower curves indicate FICO scores are less predictive for those populations. This demonstrates, as one should expect, that the majority (white) group is classified more accurately than minority groups, even over-represented minority groups like Asians.
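To make the threshold-picking concrete, the following sketch computes profit-maximizing per-group thresholds under the max-profit, race-blind, demographic-parity, and equal-opportunity constraints. All distributions, group names, and rates are synthetic placeholders for the FICO marginals, and the randomized equalized-odds rule is omitted for brevity.

```python
# Profit-maximizing thresholds under different fairness constraints (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
groups = {}
for g, (mu, n) in {"A": (660, 30_000), "B": (610, 20_000)}.items():
    score = rng.normal(mu, 60, size=n)
    p_repay = 1 / (1 + np.exp(-(score - 600) / 40))      # repay prob. rises with score
    groups[g] = (score, (rng.random(n) < p_repay).astype(int))

COST_RATIO = 82 / 18      # a default costs 82/18 times the gain from a repaid loan

def profit(score, y, thr):
    approved = score >= thr
    good = y[approved].sum()
    return good - COST_RATIO * (approved.sum() - good)

grid = np.linspace(300, 850, 551)

# Max profit: an independent threshold per group.
max_profit_thr = {g: max(grid, key=lambda t, s=s, y=y: profit(s, y, t))
                  for g, (s, y) in groups.items()}

# Race blind: one shared threshold chosen on the pooled population.
all_s = np.concatenate([s for s, _ in groups.values()])
all_y = np.concatenate([y for _, y in groups.values()])
race_blind_thr = max(grid, key=lambda t: profit(all_s, all_y, t))

def total_profit(thr):
    return sum(profit(s, y, thr[g]) for g, (s, y) in groups.items())

# Demographic parity: equal acceptance rate in every group; pick the best rate.
parity_thr = max(({g: np.quantile(s, 1 - p) for g, (s, _) in groups.items()}
                  for p in np.linspace(0.01, 0.99, 99)), key=total_profit)

# Equal opportunity: equal acceptance rate among non-defaulters (y == 1).
oppo_thr = max(({g: np.quantile(s[y == 1], 1 - t) for g, (s, y) in groups.items()}
                for t in np.linspace(0.01, 0.99, 99)), key=total_profit)

print("max profit:        ", {g: round(float(t)) for g, t in max_profit_thr.items()})
print("race blind:        ", round(float(race_blind_thr)))
print("demographic parity:", {g: round(float(t)) for g, t in parity_thr.items()})
print("equal opportunity: ", {g: round(float(t)) for g, t in oppo_thr.items()})
```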
The left side of Figure 11 shows the fraction of people that wouldn't default that would qualify for loans under the various criteria. Under max-profit and race-blind thresholds, we find that black people that would not default have a significantly harder time qualifying for loans than others. Under demographic parity, the situation is reversed.

The right side of Figure 11 gives the profit achieved by each method, as a fraction of the max profit achievable. We show this as a function of the non-default rate above which loans are profitable (i.e., 82% in the other figures). At 82%, we find that a race blind threshold gets 99.3% of the maximal profit, equal opportunity gets 92.8%, equalized odds gets 80.2%, and demographic parity gets 69.8%. So equal opportunity fairness costs less than a quarter of what demographic parity costs—and if the classifier improves, this would reduce further.
Figure 9: FICO score thresholds (raw and as within-group percentiles) for various definitions of fairness. The equal odds method does not give a single threshold, but instead Pr[Ŷ = 1 | R, A] increases over some not uniquely defined range; we pick the one containing the fewest people. Observe that, within each race, the equal opportunity threshold and average equal odds threshold lie between the max profit threshold and equal demography thresholds.
The difference between equal odds and equal opportunity is that under equal opportunity, the classifier can make use of its better accuracy among whites. Under equal odds this is viewed as unfair, since it means that white people who wouldn’t pay their loans have a harder time getting them than minorities who wouldn’t pay their loans. An equal odds classifier must classify everyone as poorly as the hardest group, which is why it costs over twice as much in this case. This also leads to more conservative lending, so it is slightly harder for non-defaulters of all groups to get loans.
The equal opportunity classifier does make it easier for defaulters to get loans if they are minorities, but the incentives are aligned properly. Under max profit, a small group may not be worth figuring out how to classify and so be treated poorly, since the classifier can't identify the qualified individuals. Under equal opportunity, such poorly-classified groups are instead treated better than well-classified groups. The cost is thus borne by the company using the classifier, which can decide to invest in better classification, rather than by the classified group, which cannot. Equalized odds gives a similar, but much stronger, incentive since the cost for a small group is not proportional to its size.

While race blindness achieves high profit, the fairness guarantee is quite weak. As with max profit, small groups may be classified poorly and so treated poorly, and the company has little incentive to improve the accuracy. Furthermore, when race is redundantly encoded, race blindness degenerates into max profit.
Figure 10: The ROC curve for using FICO score to identify non-defaulters (per-group curves, with a zoomed-in view). Within a group, we can achieve any convex combination of these outcomes. Equality of opportunity picks points along the same horizontal line. Equal odds picks a point below all lines.
Figure 11: On the left, we see the fraction of non-defaulters that would get loans. On the right, we see the profit achievable for each notion of fairness, as a function of the false positive/negative trade-off.

# 8 Conclusions

We proposed a fairness measure that accomplishes two important desiderata. First, it remedies the main conceptual shortcomings of demographic parity as a fairness notion. Second, it is fully aligned with the central goal of supervised machine learning, that is, to build higher-accuracy classifiers. In light of our results, we draw several conclusions aimed to help interpret and apply our framework effectively.
Choose reliable target variables. Our notion requires access to observed outcomes such as default rates in the loan setting. This is precisely the same requirement that supervised learning generally has. The broad success of supervised learning demonstrates that this requirement is met in many important applications. That said, having access to reliable "labeled data" is not always possible. Moreover, the measurement of the target variable might in itself be unreliable or biased. Domain-specific scrutiny is required in defining and collecting a reliable target variable.

Measuring unfairness, rather than proving fairness. Due to the limitations we described, satisfying our notion (or any other oblivious measure) should not be considered a conclusive proof of fairness. Similarly, violations of our condition are not meant to be a proof of unfairness. Rather, we envision our framework as providing a reasonable way of discovering and measuring potential concerns that require further scrutiny. We believe that resolving fairness concerns is ultimately impossible without substantial domain-specific investigation. This realization echoes earlier findings in "Fairness through Awareness" [DHP+12] describing the task-specific nature of fairness.
Incentives. Requiring equalized odds creates an incentive structure for the entity building the predictor that aligns well with achieving fairness. Achieving better prediction with equalized odds requires collecting features that more directly capture the target Y, unrelated to its correlation with the protected attribute. Deriving an equalized odds predictor from a score involves considering the pointwise minimum ROC curve among different protected groups, encouraging the construction of predictors that are accurate in all groups, e.g., by collecting data appropriately or basing prediction on features predictive in all groups.
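As a rough illustration of the pointwise-minimum ROC idea, the sketch below computes each group's ROC curve with scikit-learn, interpolates onto a shared false-positive-rate grid, and takes the pointwise minimum true-positive rate. The scores and labels are synthetic placeholders, with one well-predicted and one poorly-predicted group.

```python
# Pointwise-minimum ROC curve across protected groups (synthetic data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
fpr_grid = np.linspace(0, 1, 101)
min_tpr = np.ones_like(fpr_grid)

for group_quality in (1.5, 0.7):        # one well-predicted group, one poorly predicted
    y = rng.integers(0, 2, size=20_000)
    scores = y * group_quality + rng.normal(size=20_000)
    fpr, tpr, _ = roc_curve(y, scores)
    tpr_on_grid = np.interp(fpr_grid, fpr, tpr)   # roc_curve returns increasing fpr
    min_tpr = np.minimum(min_tpr, tpr_on_grid)

# (fpr_grid, min_tpr) traces the ROC points achievable simultaneously in every
# group, i.e. the candidates for an equalized odds predictor derived from this score.
print(np.round(min_tpr[::20], 3))
```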
When to use our post-processing step. An important feature of our notion is that it can be achieved via a simple and efficient post-processing step. In fact, this step requires only aggregate information about the data and therefore could even be carried out in a privacy-preserving manner (formally, via Differential Privacy). In contrast, many other approaches require changing a usually complex machine learning training pipeline, or require access to raw data. Despite its simplicity, our post-processing step exhibits a strong optimality principle. If the underlying score was close to optimal, then the derived predictor will be close to optimal among all predictors satisfying our definition. However, this does not mean that the predictor is necessarily good in an absolute sense. It also does not mean that the loss compared to the original predictor is always small. An alternative to using our post-processing step is always to invest in better features and more data. Only when this is no longer an option should our post-processing step be applied.
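Since the post-processing step only consumes aggregate conditional counts, those aggregates could, for instance, be released with the Laplace mechanism before the thresholds are computed. A toy sketch follows; the epsilon and the counts are made up, and this is an illustration of the general idea rather than the paper's own procedure.

```python
# Release the aggregate counts needed for post-processing with the Laplace mechanism.
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.5
# Hypothetical counts of people with score above a candidate threshold, broken
# down by outcome Y and group A; each person contributes to exactly one cell,
# so the sensitivity of the whole table is 1.
counts = {("Y=1", "A=0"): 8_400, ("Y=1", "A=1"): 2_100,
          ("Y=0", "A=0"): 1_300, ("Y=0", "A=1"): 900}

noisy_counts = {cell: c + rng.laplace(scale=1.0 / epsilon) for cell, c in counts.items()}
for cell, c in noisy_counts.items():
    print(cell, round(c, 1))
```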
Predictive affirmative action. In some situations, including Scenario II in Section 6, the equalized odds predictor can be thought of as introducing some sort of affirmative action: the optimally predictive score R∗ is shifted based on A. This shift compensates for the fact that, due to uncertainty, the score is in a sense more biased than the target label (roughly, R∗ is more correlated with A than Y is correlated with A). Informally speaking, our approach transfers the burden of uncertainty from the protected class to the decision maker. We believe this is a reasonable proposal, since it incentivizes the decision maker to invest additional resources toward building a better model.

# References

Solon Barocas and Andrew Selbst. Big data's disparate impact. California Law Review, 104, 2016.

[BZVGRG15] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Learning fair classifiers. CoRR, abs/1507.05259, 2015.

T. Calders, F. Kamiran, and M. Pechenizkiy. Building classifiers with independency constraints. In Proc. IEEE International Conference on Data Mining Workshops, pages 13–18, 2009.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. Fairness through awareness. In Proc. ACM ITCS, pages 214–226, 2012.

Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proc. 21st ACM SIGKDD, pages 259–268. ACM, 2015.

Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. CoRR, abs/1609.05807, 2016.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. The variational fair autoencoder. CoRR, abs/1511.00830, 2015.

John Podesta, Penny Pritzker, Ernest J. Moniz, John Holdren, and Jeffrey Zients. Big data: Seizing opportunities and preserving values. Executive Office of the President, May 2014.

Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In Proc. 14th ACM SIGKDD, 2008.
US Federal Reserve. Report to the congress on credit scoring and its effects on the availability and affordability of credit, 2007.

Andrea Romei and Salvatore Ruggieri. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29:582–638, 11 2014.

[Was10] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2010.

Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President, May 2016.

Indre Zliobaite. On the relation between accuracy and fairness in binary classification. CoRR, abs/1505.05723, 2015.

[ZWS+13] Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning fair representations. In Proc. 30th ICML, 2013.
# Understanding intermediate layers using linear classifier probes

Guillaume Alain, Mila, University of Montreal ([email protected])
Yoshua Bengio, Mila, University of Montreal

# Abstract

Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increases monotonically along the depth of the model.

# 1 Introduction

The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks.
Deep neural networks still carry some of their original reputation of being black boxes, but many efforts have been made to understand better what they do, what the role of each layer is (Yosinski et al., 2014), how we can interpret them (Zeiler and Fergus, 2014), and how we can fool them (Biggio et al., 2013; Szegedy et al., 2013).

In this paper, we take the features of each layer separately and we fit a linear classifier to predict the original classes. We refer to these linear classifiers as "probes" and we make sure that we never influence the model itself by taking measurements with probes. We suggest that the reader think of those probes as thermometers used to measure the temperature simultaneously at many different locations. More broadly speaking, the core of the idea is that there are interesting quantities that we can report based on the features of many independent layers if we allow the "measuring instruments" to have their own trainable parameters (provided that they do not influence the model itself).
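A minimal sketch of such a probe, using a toy PyTorch model and random data as stand-ins: the intermediate features are captured with a forward hook and detached, so gradients from the probe never reach the model itself.

```python
# A self-contained sketch of a linear probe on an intermediate layer (toy model,
# random data). The probed model is frozen; only the probe's parameters are trained.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                       # stand-in for a real classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, 10),
)
model.eval()

features = {}
def hook(module, inputs, output):
    features["h"] = output.detach().flatten(start_dim=1)   # cut the gradient path

handle = model[3].register_forward_hook(hook)   # probe the second ReLU's output

x = torch.randn(256, 1, 28, 28)              # fake images
y = torch.randint(0, 10, (256,))             # fake labels

probe = nn.Linear(16 * 28 * 28, 10)          # linear classifier on raw layer features
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    with torch.no_grad():
        model(x)                             # forward pass only fills `features`
    loss = loss_fn(probe(features["h"]), y)
    opt.zero_grad(); loss.backward(); opt.step()

handle.remove()
print("final probe loss:", round(loss.item(), 3))
```

In a real experiment one such probe would be attached per layer, and each probe's validation accuracy would be reported separately.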
In the context of this paper, we are working with convolutional neural networks on image classification tasks on the MNIST and ImageNet (Russakovsky et al., 2015) datasets. Naturally, we fit linear classifier probes to predict those classes, but in general it is possible to monitor the performance of the features on any other objective.

Our contributions in this paper are twofold. Firstly, we introduce these "probes" as a general tool to understand deep neural networks. We show how they can be used to characterize different layers, to debug bad models, or to get a sense of how the training is progressing in a well-behaved model. While our proposed idea shares commonalities with Montavon et al. (2011), our analysis is very different. Secondly, we observe that the measurements of the probes are surprisingly monotonic, which means that the degree of linear separability of the features of layers increases as we reach the deeper layers. The level of regularity with which this happens is surprising given that this is not technically part of the training objective. This helps to understand the dynamics of deep neural networks.
# 2 Related Work

Many researchers have come up with techniques to analyze certain aspects of neural networks which may guide our intuition and provide a partial explanation as to how they work. In this section we will provide a survey of the literature on the subject, with a little more focus on papers related to our current work.

# 2.1 Linear classification with kernel PCA

In our paper we investigate the linear separability of the features found at intermediate layers of a deep neural network. A similar starting point is presented by Montavon et al. (2011). In that particular case, the authors use kernel PCA to project the features of a given layer onto a new representation which will then be used to fit the best linear classifier. They use a radial basis function as kernel, and they choose to project the features of individual layers by using the d leading eigenvectors of the kernel PCA decomposition. They investigate the effects that d has on the quality of the linear classifier.
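A rough sketch of that style of analysis with scikit-learn, using random arrays as placeholders for a layer's features: project with RBF kernel PCA onto the d leading components, then fit a linear classifier and look at accuracy as a function of d.

```python
# Kernel PCA projection of a layer's features followed by a linear classifier.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
layer_features = rng.normal(size=(2000, 256))   # stand-in for one layer's activations
labels = rng.integers(0, 10, size=2000)         # stand-in class labels

X_tr, X_te, y_tr, y_te = train_test_split(layer_features, labels, random_state=0)

for d in (4, 16, 64):                            # number of leading components
    kpca = KernelPCA(n_components=d, kernel="rbf", gamma=1.0 / 256)
    Z_tr = kpca.fit_transform(X_tr)
    Z_te = kpca.transform(X_te)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    print(d, round(clf.score(Z_te, y_te), 3))    # accuracy as a function of d
```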
Naturally, for a sufficiently large d, it would be possible to overfit on the training set (given how easy this is with a radial basis function), so they consider the situation where d is relatively small. They demonstrate that, for deeper layers in a neural network, they can achieve good performance with smaller d. This suggests that the features of the original convolutional neural network are indeed more "abstract" as we go deeper, which corresponds to the general intuition shared by many researchers. They explore convolutional networks of limited depth with a restricted subset of 10k training samples of MNIST and CIFAR-10.
# 2.2 Generalization and transferability of layers

There are good arguments to support the claim that the first layers of a convolutional network for image recognition contain filters that are relatively "general", in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In Yosinski et al. (2014) the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers. In Donahue et al. (2014) the authors study the transfer of features from the last few layers of a model to a novel generic task. In Zeiler and Fergus (2014) the authors show that the filters are picking up certain patterns that make sense to us visually, and they show a method to visually inspect the filters as input images.
# 2.3 Relevance Propagation

In Bach et al. (2015), the authors introduce the idea of Relevance Propagation as a way to identify which pixels of the input space are the most important to the classifier on the final layer. Their approach frames the "relevance" as a kind of quantity that is to be preserved across the layers, as a sort of shared responsibility to be divided among the features of a given layer. In Binder et al. (2016) the authors apply the concept of Relevance Propagation to a larger family of models. Among other things, they provide a nice experiment where they study the effects of corrupting the pixels deemed the most relevant, and they show how this affects performance more than corrupting randomly-selected pixels (see Figure 2 of their paper). See also Lapuschkin et al. (2016). Other research dealing with Relevance Propagation includes Arras et al. (2017), where this is applied to RNNs on text.

We would also note that a good number of papers on the interpretability of neural networks deal with "interpretations" taking the form of regions of the original image being identified, or where the pixels in the original image receive a certain value of how relevant they are (e.g. a heat map of relevance).
In those cases we rely on the human user to parse the regions of the image with their vision so as to determine whether the region indeed makes sense or whether the information contained within is irrelevant to the task at hand. This is analogous to the way that image-captioning attention (Xu et al., 2015) can highlight portions of the input image that inspired specific segments of the caption.

An interesting approach is presented in Mahendran and Vedaldi (2015, 2016); Dosovitskiy and Brox (2016), where the authors analyze the set of "equivalent" inputs in the sense that some of the features at a given layer should be preserved. Given a layer to study, they apply a regularizer (e.g. total variation) and use gradient descent in order to reconstruct the pre-image that yields the same features at that layer, but for which the regularizer would be minimized. This procedure yields pre-images that are of the same format as the input image, and which can be used to get a sense of which components of the original image are preserved. For certain tasks, one may be surprised as to how many details of the input image are being completely discarded by the time we reach the fully-connected layers at the end of a convolutional neural network.
# 2.4 SVCCA

In Raghu et al. (2017a,b) the authors study the question of whether neural networks are trained from the first to the last layer, or the other way around (i.e. "bottom up" vs "top down"). The concept is rather intuitive, but it still requires a proper definition of what they mean. They use Canonical Correlation Analysis (CCA) to compare two instances of a given model trained separately. Given that two different instances of the same model might assign entirely different roles to their neurons (on corresponding layers), this is a comparison that is normally impossible to even attempt. On one side, they take a model that has already been optimized. On the other side, they take multiple snapshots of a model during training. Every layer of one model is being compared with every other layer of the other. The values computed by CCA allow them to report the correlation between every pair of layers. This shows how quickly a given layer of the model being trained is going to achieve a configuration equivalent to the one of the optimized model. They find that the early layers reach their final configuration, so to speak, much earlier than layers downstream.
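A bare-bones sketch of such a CCA comparison (note that SVCCA proper also includes an SVD preprocessing step, which is skipped here); the activation matrices are random placeholders for two models' layer activations on the same inputs.

```python
# Compare two layers' activations with CCA and report the canonical correlations.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(5000, 64))                 # layer k of model A on 5000 inputs
mixing = rng.normal(size=(64, 64))
acts_b = acts_a @ mixing + 0.5 * rng.normal(size=(5000, 64))   # a rotated, noisy "layer"

n_components = 10
cca = CCA(n_components=n_components, max_iter=1000)
za, zb = cca.fit_transform(acts_a, acts_b)

corrs = [np.corrcoef(za[:, i], zb[:, i])[0, 1] for i in range(n_components)]
print("mean canonical correlation:", round(float(np.mean(corrs)), 3))
```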
Given that any two sets of features can be compared using CCA, they also compare the correlation between any intermediate layer and the ground truth. This gives a sense of how easy it would be to predict the target label using the features of any intermediate layer instead of only using the last layer (as convnets usually do). Refer to Figure 6 of Raghu et al. (2017b) for more details. This aspect of Raghu et al. (2017b) is very similar to our own previous work (Alain and Bengio, 2016).

# 3 Monitoring with probes

# Information theory, and monotonic improvements to linear separability

The initial motivation for linear classifier probes was related to a reflection about the nature of information (in the entropy sense of the word) passing from one layer to the next. New information is never added as we propagate forward in a model. If we consider the typical image classification problem, the representation of the data is transformed over the course of many layers, to be finally used by a linear classifier at the last layer.
In the case of a binary classifier (say, detecting the presence or absence of a lion in a picture of the savannah like in Figure 1), we could say that there was at most one bit of information to be uncovered in the original image. Lion or no lion? Here we are not interested in measuring the information about the pixels of an image that we want to reconstruct. That would be a different problem.

This is illustrated in a formal way by the Data Processing Inequality. It states that, for three random variables satisfying the Markov chain dependency X → Y → Z, we have

I(X; Z) ≤ I(X; Y),

where I(X; Y) denotes the mutual information.
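To make the inequality concrete, here is a small numeric check on a toy chain X → Y → Z. The transition matrices are made up for illustration and are not taken from the paper; the point is only that processing Y further into Z can never increase the information carried about X.

```python
# Toy numeric check of the Data Processing Inequality on a chain X -> Y -> Z.
# All distributions below are hypothetical.
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in bits for a joint distribution table p_joint[a, b]."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (pa @ pb)[mask])))

p_x = np.array([0.5, 0.5])                         # e.g. "lion" vs "no lion"
p_y_given_x = np.array([[0.9, 0.1], [0.2, 0.8]])   # noisy intermediate variable
p_z_given_y = np.array([[0.8, 0.2], [0.3, 0.7]])   # further (noisy) processing

p_xy = p_x[:, None] * p_y_given_x                  # joint of (X, Y)
p_xz = p_xy @ p_z_given_y                          # joint of (X, Z): Z depends only on Y

print("I(X;Y) =", mutual_information(p_xy))
print("I(X;Z) =", mutual_information(p_xz))        # always <= I(X;Y)
```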
(a) hex dump of picture of a lion (b) same lion in human-readable format

Figure 1: The hex dump represented at the left has more information content than the image at the right. Only one of them can be processed by the human brain in time to save one's life. Computational convenience matters. Not just entropy.

The task of a deep neural network classifier is to come up with a representation for the final layer that can be easily fed to a linear classifier (i.e. the most elementary form of useful classifier). The cross-entropy loss applies a lot of pressure directly on the last layer to make it linearly separable. Any degree of linear separability in the intermediate layers happens only as a by-product.

On one hand, we have that every layer has less information than its parent layer. On the other hand, we observe experimentally in Sections 3.5, 4.1 and 4.2 that features from deeper layers work better with linear classifiers to predict the target labels. At first glance this might seem like a contradiction. One of the important lessons is that neural networks are really about distilling computationally useful representations, and they are not about information content as described by the field of Information Theory.
# 3.2 Linear classifier probes

Consider the common scenario in deep learning in which we are trying to classify the input data X to produce an output distribution over D classes. The last layer of the model is a densely-connected map to D values followed by a softmax, and we train by minimizing cross-entropy.

At every layer we can take the features h_k from that layer and try to predict the correct labels y using a linear classifier parameterized as

f_k : H_k → [0, 1]^D
h_k ↦ softmax(W h_k + b),

where h_k ∈ H_k are the features of hidden layer k, [0, 1]^D is the space of categorical distributions over the D target classes, and (W, b) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss.

Let L_k^train be the empirical loss of that linear classifier f_k evaluated over the training set. We can also define L_k^valid and L_k^test by evaluating the same linear classifier on the validation and test sets.

Without making any assumptions about the model itself being trained, we can nevertheless assume that these f_k are themselves optimized so that, at any given time, they reflect the currently optimal thing that can be done with the features present.
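A minimal sketch of fitting such a probe offline, using scikit-learn's multinomial logistic regression as the convex softmax classifier. The feature and label arrays are random placeholders standing in for h_k extracted from the model under study; the reported quantities are error-rate analogues of L_k^train and L_k^valid.

```python
# Minimal sketch of a probe f_k: a softmax (multinomial logistic) classifier
# fit on frozen features of layer k. The arrays below are placeholders for
# features extracted from the model and their target labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(5000, 256))      # h_k over the training set
train_y = rng.integers(0, 10, size=5000)        # D = 10 target classes
valid_feats = rng.normal(size=(1000, 256))
valid_y = rng.integers(0, 10, size=1000)

probe = LogisticRegression(max_iter=1000)       # softmax cross-entropy, convex
probe.fit(train_feats, train_y)

train_err = 1.0 - probe.score(train_feats, train_y)   # analogue of L_k^train
valid_err = 1.0 - probe.score(valid_feats, valid_y)   # analogue of L_k^valid
print(f"probe error  train={train_err:.3f}  valid={valid_err:.3f}")
```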
We refer to those linear classifiers as “probes” in an effort to clarify our thinking about the model. These probes do not affect the model training. They only measure the level of linear separability of the features at a given layer.

Blocking the backpropagation from the probes to the model itself can be achieved by using tf.stop_gradient in TensorFlow (or its Theano equivalent), or by managing the probe parameters separately from the model parameters. Note that we can avoid the issue of local minima because training a linear classifier using softmax cross-entropy is a convex problem.

In this paper, we study
• how L_k decreases as k increases (see Section 3.1),
• the usefulness of L_k as a diagnostic tool (see Section 5.1).

# 3.3 Practical concern: L_k^train vs L_k^valid

The reason why we care about optimality of the probes in Section 3.2 is because it abstracts away the problem of optimizing them. When a general function g(x) has a unique global minimum, we can talk about that minimum without ambiguity even though, in practice, we are probably going to use only a convenient approximation of the minimum.
This is acceptable in a context where we are seeking better intuition about deep learning models by using linear classifier probes. If a researcher judges that the measurements are useful to further their understanding of their model (and act on that intuition), then they should not worry too much about how close they are to optimality. This applies also to the question of whether we should prioritize L_k^train or L_k^valid, given that it might not always be easy to track L_k^valid during training.

Moreover, for the purposes of many of the experiments in this paper we chose to report the classification error instead of the cross-entropy, since this is ultimately often the quantity that matters the most. Reporting the top-5 classification error could also have been possible.

# 3.4 Practical concern: Dimension reduction on features

Another practical problem can arise when certain layers of a neural network have an exceedingly large quantity of features. The first few layers of Inception v3, for example, have a few million features when we multiply height, width and channels. This leads to the parameters for a single probe taking upwards of a few gigabytes of storage, which is disproportionately large when we consider that the entire set of model parameters takes less space than that.
In those cases, we have three possible suggestions for trimming down the space of features on which we fit the probes.

• Use only a random subset of the features (but always the same ones). This is used on the Inception v3 model in Section 4.2.

• Project the features to a lower-dimensional space. Learn this mapping. This is probably a worse idea than it sounds because the projection matrix itself can take a lot of storage (even more than the probe parameters).

• When dealing with features in the form of images (height, width, channels), we can perform 2D pooling along the (height, width) of each channel. This reduces the number of features to the number of channels. This is used on the ResNet-50 model in Section 4.1.

In practice, when using linear classifier probes on any serious model (i.e. not MNIST) we have to choose a way to reduce the number of features used. Note that we also want to avoid a situation where our probes are simply overfitting on the features because there are too many features. It was recently demonstrated that very large models can fit random labels on ImageNet (Zhang et al., 2016). This is a situation that we want to avoid because the probe measurements would be entirely meaningless in that situation. Dimensionality reduction helps with this concern.
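As a concrete illustration of the third option, the snippet below collapses a (height, width, channels) feature map to one value per channel by average pooling before fitting a probe. The shapes and names are made up for the example.

```python
# Sketch: reduce (n, height, width, channels) feature maps to (n, channels)
# by 2D average pooling, so a probe only sees one feature per channel.
import numpy as np

def pool_features(feature_maps):
    """feature_maps: array of shape (n_examples, height, width, channels)."""
    return feature_maps.mean(axis=(1, 2))          # -> (n_examples, channels)

fmaps = np.random.default_rng(0).normal(size=(8, 28, 28, 512))  # hypothetical layer output
probe_inputs = pool_features(fmaps)
print(probe_inputs.shape)                          # (8, 512)
```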
# 3.5 Basic example on MNIST

In this section we run the MNIST convolutional model provided by the tensorflow/models github repository (image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes.

We start by sketching the model in Figure 2. We report the results at the beginning and the end of training in Figure 3. One of the interesting dynamics to be observed there is how useful the first 5 layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al., 2009).

(Figure 2 diagram: input images → convolution 5×5, 32 filters, ReLU → maxpool 2×2 → convolution 5×5, 64 filters, ReLU → maxpool 2×2 → fully-connected layer (matmul, ReLU) → fully-connected layer (matmul) → output logits.)
Figure 2: This graphical model represents the neural network that we are going to use for MNIST. The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier). However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do.

(a) After initialization, no training. (b) After training for 10 epochs.

Figure 3: We represent here the test prediction error for each probe, at the beginning and at the end of training. This measurement was obtained through early stopping based on a validation set of 10^4 elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features. The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fc1 preact).
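For concreteness, here is a minimal tf.keras sketch of a convnet of this shape with a single probe attached through tf.stop_gradient, so that probe gradients never reach the model. The hyperparameters follow the description above but may not match the tensorflow/models script exactly, and the probed location (the second pooling layer) is just one of the many insertion points shown in Figure 2.

```python
# Sketch: the small MNIST convnet with one gradient-blocked probe attached.
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))
h = tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu")(inputs)
h = tf.keras.layers.MaxPooling2D(2)(h)
h = tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu")(h)
pool2 = tf.keras.layers.MaxPooling2D(2)(h)
fc1 = tf.keras.layers.Dense(512, activation="relu")(tf.keras.layers.Flatten()(pool2))
logits = tf.keras.layers.Dense(10)(fc1)

# Probe on pool2: a linear softmax classifier fed gradient-blocked features,
# so it cannot influence the training of the model itself.
frozen = tf.keras.layers.Lambda(tf.stop_gradient)(tf.keras.layers.Flatten()(pool2))
probe_logits = tf.keras.layers.Dense(10)(frozen)

model = tf.keras.Model(inputs, {"main": logits, "probe_pool2": probe_logits})
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss={"main": loss, "probe_pool2": loss})
# model.fit(x_train, {"main": y_train, "probe_pool2": y_train}, epochs=10)
```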
# 3.6 Other objectives

Note that it would be entirely possible to use linear classifier probes on a different set of labels. For the same reason as it is possible to transfer many layers from one vision task to another (e.g. with different classes), we are not limited to fitting probes using the same domain. Inserting probes at many different layers of a model is essentially a way to ask the following question: Is there any information about a given factor present in this part of the model?

# 4 Experiments with popular models

# 4.1 ResNet-50

The family of ResNet models (He et al., 2016) is characterized by their large quantities of residual layers, mapping essentially x ↦ x + r(x). They have been very successful, and there are various papers seeking to understand better how they work (Veit et al., 2016; Larsson et al., 2016; Singh et al., 2016). Here we are going to show how linear classifier probes might be able to help us a little to shed some light on the ResNet-50 model. We used the pretrained model from the github repo (fchollet/deep-learning-models) of the author of Keras (Chollet et al., 2015).
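The sketch below shows one way that intermediate ResNet-50 features could be exposed for probing with the pretrained Keras model, using global average pooling as the dimensionality reduction of Section 3.4. The layer names are illustrative only, since they depend on the Keras version; the probes themselves would then be fit on the extracted feature matrices as in Section 3.2.

```python
# Sketch: expose pooled intermediate outputs of a pretrained ResNet-50 for probing.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet")

# Illustrative layer names (they differ across Keras versions); here, the output
# of the last residual block of each stage.
probe_layer_names = ["conv2_block3_out", "conv3_block4_out",
                     "conv4_block6_out", "conv5_block3_out"]
outputs = [base.get_layer(name).output for name in probe_layer_names]

# Global average pooling reduces each (h, w, c) feature map to c features.
pooled = [tf.keras.layers.GlobalAveragePooling2D()(o) for o in outputs]
extractor = tf.keras.Model(base.input, pooled)

# feats = extractor.predict(image_batch)  # one feature matrix per probed layer,
# each of which is then fed to its own linear probe.
```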
One of the questions that comes up when discussing ResNet models is whether the successive layers are essentially performing the same operation over and over, refining the representation just a little more each time, or whether there is a more fundamental change of representation happening. In particular, we can point to certain places in ResNet-50 where the image size diminishes and we increase the number of channels. This happens at three places in the model (identified with blank lines in the table of Figure 4a).
layer name    topology           probe valid prediction error
input_1       (224, 224, 3)      0.99

add_1         (28, 28, 256)      0.94
add_2         (28, 28, 256)      0.89
add_3         (28, 28, 256)      0.88

add_4         (28, 28, 512)      0.87
add_5         (28, 28, 512)      0.82
add_6         (28, 28, 512)      0.79
add_7         (28, 28, 512)      0.76

add_8         (14, 14, 1024)     0.77
add_9         (14, 14, 1024)     0.69
add_10        (14, 14, 1024)     0.67
add_11        (14, 14, 1024)     0.62
add_12        (14, 14, 1024)     0.57
add_13        (14, 14, 1024)     0.51

add_14        (7, 7, 2048)       0.41
add_15        (7, 7, 2048)       0.39
add_16        (7, 7, 2048)       0.31
(a) Validation errors for probes. Comparing different ResNet-50 layers. Pre-trained on the ImageNet dataset.

(b) Inserting probes at meaningful layers of ResNet-50. This plot shows the rightmost column of the table in Figure 4a. Reporting the validation error for probes (magenta) and comparing it with the validation error of the pre-trained model (green).

Figure 4: For the ResNet-50 model trained on ImageNet, we can see that deeper features are better at predicting the output classes. More importantly, the relationship between depth and validation prediction error is almost perfectly monotonic. This suggests a certain “greedy” aspect of the representations used in deep neural networks. This property is something that comes naturally as a result of conventional training, and it is not due to the insertion of probes in the model.

# 4.2 Inception v3

We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). We show using colors in Figure 5 how the predictive error of each layer can be measured using probes. This can be computed at many different times during training, but here we report only after minibatch 308230, which corresponds to about 2 weeks of training.
This model has a few particularities, one of which is that it features an auxiliary branch that contributes to training the model (it can be discarded afterwards, but not necessarily). We wanted to investigate whether this branch is “leading training”, in the sense that its classifier might have lower prediction error than the main head for the first part of the training. This is something that we confirmed by looking at the prediction errors for the probes, but the difference was not very large. The auxiliary branch was ahead of the main branch by just a little.

The smooth gradient of colors in Figure 5 shows how the linear separability increases monotonically as we probe layers deeper into the network. Refer to Appendix Section C for a comparison at four different moments of training, and for some more details about how we reduced the dimensionality of the features to make this more tractable.

Figure 5: Inception v3 model after 2 weeks of training. Red is bad (high prediction error) and green/blue is good (low prediction error). The smooth color gradient shows a very gradual transition in the degree of linear separability (almost perfectly monotonic).
# 5 Diagnostics for failing models

# 5.1 Pathological behavior on skip connections

In this section we show an example of a situation where we can use probes to diagnose a training problem as it is happening. We purposefully selected a model that was pathologically deep so that it would fail to train under normal circumstances. We used 128 fully-connected layers of 128 hidden units to classify MNIST, which is not at all a model that we would recommend.

We thought that something interesting might happen if we added a very long skip connection that bypasses the first half of the model completely (Figure 6a). With that skip connection, the model became trainable through the usual SGD. Intuitively, we thought that the latter portion of the model would see use at first, but then we did not know whether the first half of the model would then also become useful.

Using probes we show that this solution was not working as intended, because half of the model stays unused. The weights are not zero, but there is no useful signal passing through that segment. The skip connection left a dead segment and skipped over it.

The lesson that we want to show the reader is not that skip connections are bad. Our goal here is to show that linear classifier probes are a tool to understand what is happening internally in such situations. Sometimes the successful minimization of a loss fails to capture important details.
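A minimal sketch of the wiring just described: 128 dense layers of 128 units, with a long skip connection from the input to the middle of the stack. The activation choice and the exact form of the skip (a learned projection added to the midpoint features) are assumptions made for illustration; the probes of Section 3.2 would then be inserted after each dense layer.

```python
# Sketch of the pathologically deep MLP with a long skip connection.
import tensorflow as tf

depth, width, num_classes = 128, 128, 10
inputs = tf.keras.Input(shape=(784,))

h = inputs
for _ in range(depth // 2):                      # first half of the stack
    h = tf.keras.layers.Dense(width, activation="relu")(h)

# Long skip connection: the second half also receives a projection of the input,
# so it can bypass the first half entirely.
skip = tf.keras.layers.Dense(width)(inputs)
h = tf.keras.layers.Add()([h, skip])

for _ in range(depth // 2):                      # second half of the stack
    h = tf.keras.layers.Dense(width, activation="relu")(h)

logits = tf.keras.layers.Dense(num_classes)(h)
model = tf.keras.Model(inputs, logits)
```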
# 6 Discussion and future work

We have presented results for both a small convnet on MNIST and the larger popular convnets Inception v3 and ResNet-50. It would be nice to continue this work and look at ResNet-101, ResNet-152, VGG-16 and VGG-19. A similar thing could be done with popular RNNs also.

To apply linear classifier probes to a different context, we could also try any setting where either Generative Adversarial Networks (Goodfellow et al., 2014) or adversarial examples are used (Szegedy et al., 2013).
(a) Model with 128 layers. A skip connection goes from the beginning straight to the middle of the graph. (b) Probes after 500 minibatches. (c) Probes after 2000 minibatches.

Figure 6: Pathological skip connection being diagnosed. Refer to Appendix Section A for explanations about the special notation for probes using the “diode” symbol.

The idea of multi-layer probes has been suggested to us on multiple occasions. This could be seen as a natural extension of the linear classifier probes. One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe.

One completely new direction would be to train a model in a way that actively discourages certain internal layers from being useful to linear classifiers. What would be the consequences of this constraint? Would it handicap a given model, or would the model simply adjust without any trouble? At that point, we are no longer dealing with non-invasive probes, but we are feeding a strange kind of signal back to the model.
Finally, we think that it is rather interesting that the probe prediction errors are almost perfectly monotonically decreasing. We suspect that this warrants a deeper investigation into the reasons why it happens, and it may lead to the discovery of fundamental concepts for better understanding deep neural networks (in relation to their optimization). This is connected to the work done by Jastrzebski et al. (2017).

# 7 Conclusion

In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers.

We have observed experimentally that an interesting property holds: the level of linear separability increases monotonically as we go to deeper layers. This is purely an indirect consequence of enforcing this constraint on the last layer.

We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error. We are now able to ask new questions and explore new areas. We hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers that design them.

# Acknowledgments
Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Thanks to Nicolas Ballas for fruitful discussions, to Reyhane Askari and Mohammad Pezeshki for proofreading and comments, and to all the reviewers for their comments.

# References

Alain, G. and Bengio, Y. (2016). Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644.

Arras, L., Montavon, G., Müller, K.-R., and Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140.

Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., and Roli, F. (2013). Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer.

Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R., and Samek, W. (2016). Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks, pages 63–71. Springer.

Chollet, F. et al. (2015). Keras. https://github.com/fchollet/keras.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2014). Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pages 647–655.

Dosovitskiy, A. and Brox, T. (2016). Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4829–4837.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Jarrett, K., Kavukcuoglu, K., Lecun, Y., et al. (2009). What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pages 2146–2153. IEEE.

Jastrzebski, S., Arpit, D., Ballas, N., Verma, V., Che, T., and Bengio, Y. (2017). Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773.

Lapuschkin, S., Binder, A., Montavon, G., Müller, K.-R., and Samek, W. (2016). Analyzing classifiers: Fisher vectors and deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2912–2920.

Larsson, G., Maire, M., and Shakhnarovich, G. (2016). Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648.

Mahendran, A. and Vedaldi, A. (2015). Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5188–5196.
Mahendran, A. and Vedaldi, A. (2016). Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision, 120(3), 233–255.

Montavon, G., Braun, M. L., and Müller, K.-R. (2011). Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep), 2563–2581.

Raghu, M., Yosinski, J., and Sohl-Dickstein, J. (2017a). Bottom up or top down? Dynamics of deep representations via canonical correlation analysis. arXiv.

Raghu, M., Gilmer, J., Yosinski, J., and Sohl-Dickstein, J. (2017b). SVCCA: Singular vector canonical correlation analysis for deep understanding and improvement. arXiv preprint arXiv:1706.05806.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3), 211–252.
Singh, S., Hoiem, D., and Forsyth, D. (2016). Swapout: Learning an ensemble of deep architectures. In Advances In Neural Information Processing Systems, pages 28–36.

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9.

Veit, A., Wilber, M. J., and Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems, pages 550–558.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057.

Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320–3328.

Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer.

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.

# A Diode notation

We have the following suggestion for extending traditional graphical models to describe where probes are being inserted in a model. See Figure 7.
Due to the fact that probes do not contribute to backpropagation, but they still consume the features during the feed-forward step, we thought that borrowing the diode symbol from electrical engineering might be a good idea. A diode is a one-way valve for electrical current. This notation could be useful also outside of this context with probes, whenever we want to sketch a graphical model and highlight the fact that the gradient backpropagation signal is being blocked.

Figure 7: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections.

# B Training probes with finished model

Sometimes we do not care about measuring the probe losses/accuracy during training, but we have a model that is already trained and we want to report the measurements on that static model.
In that case, it is worth considering whether we really want to augment the model by adding the probes and training the probes by iterating through the training set. Sometimes the model itself is computationally expensive to run and we can only do 150 images per second. If we have to do multiple passes over the training set in order to train probes, then it might be more efficient to run through the whole training set once and extract the features to the local hard drive.

Experimentally, in the case of the pre-trained ResNet-50 model (Section 4.1), we found that we could process approximately 100 training samples per second when doing forward propagation, but we could run through 6000 training samples per second when reading from the local hard drive. This makes it a lot easier to do multiple passes over the training set.

# C Inception v3

In Section 4.2 we showed results from an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). The results shown were taken from the last training step only.
Figure 8: Sketch of the Inception v3 model. Note the structure with the “auxiliary head” at the bottom, and the “inception modules” with a common topology represented as blocks that have 3 or 4 sub-branches. As discussed in Section 3.4, we had to resort to a technique to limit the number of features used by the linear classifier probes. In this particular experiment, we had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage: some start with 4000 features and we keep 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000. There was no simple “fair” solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable. [Figure 9 panels: probe training prediction error for the main head and auxiliary head, shown at minibatches 050389, 100876, and 308230.]
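The feature-subsampling trick can be sketched like this (illustrative only; the layer size mirrors the 426,320-feature example above): each probe draws a fixed set of at most 1000 random indices once and reuses it for all training and evaluation batches.

```python
import numpy as np

# Illustrative sketch of the subsampling trick: each probe keeps a fixed random
# subset of at most 1000 feature indices, chosen once and reused throughout
# training and evaluation.
rng = np.random.RandomState(0)

def make_probe_indices(num_features, k=1000):
    if num_features <= k:
        return np.arange(num_features)                   # small layers keep everything
    return rng.choice(num_features, size=k, replace=False)

idx = make_probe_indices(426_320)                        # e.g. a layer with 426,320 flattened activations
activations = rng.randn(4, 426_320).astype(np.float32)   # dummy mini-batch of features
probe_input = activations[:, idx]                        # shape (4, 1000), fed to the linear probe
```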
1610.01644#39
Understanding intermediate layers using linear classifier probes
Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increase monotonically along the depth of the model.
http://arxiv.org/pdf/1610.01644
Guillaume Alain, Yoshua Bengio
stat.ML, cs.LG
null
null
stat.ML
20161005
20181122
[ { "id": "1706.05806" }, { "id": "1710.04773" }, { "id": "1610.01644" }, { "id": "1611.03530" }, { "id": "1706.07206" }, { "id": "1605.07648" } ]
1610.01644
40
Figure 9: Inserting a probe at multiple moments while training the Inception v3 model on the ImageNet dataset. We show the prediction error evaluated on a random subset of 1000 features. As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a slightly lower prediction error than the main head. This is not necessarily a condition that will hold at the end of training, but merely an observation. Red is bad (high prediction error) and green/blue is good (low prediction error).
1610.01644#40
Understanding intermediate layers using linear classifier probes
Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increase monotonically along the depth of the model.
http://arxiv.org/pdf/1610.01644
Guillaume Alain, Yoshua Bengio
stat.ML, cs.LG
null
null
stat.ML
20161005
20181122
[ { "id": "1706.05806" }, { "id": "1710.04773" }, { "id": "1610.01644" }, { "id": "1611.03530" }, { "id": "1706.07206" }, { "id": "1605.07648" } ]
1609.08675
0
arXiv:1609.08675v1 [cs.CV] 27 Sep 2016 # YouTube-8M: A Large-Scale Video Classification Benchmark # Sami Abu-El-Haija [email protected] # Nisarg Kothari [email protected] # Joonseok Lee [email protected] # Paul Natsev [email protected] # George Toderici [email protected] # Balakrishnan Varadarajan [email protected] # Sudheendra Vijayanarasimhan [email protected] # Google Research
1609.08675#0
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.08675
1
ABSTRACT Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ∼8 million videos—500K hours of video—annotated with a vocabulary of 4800 visual entities. To get the videos and their (multiple) labels, we used a YouTube video annotation system, which labels videos with the main topics in them. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals, so they represent an excellent target for content-based annotation approaches. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are
1609.08675#1
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
1
# HYPERNETWORKS David Ha, Andrew Dai, Quoc V. Le Google Brain {hadavid, adai, qvl1}@google.com # ABSTRACT This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype — the hypernetwork — and a phenotype — the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters. # 1 INTRODUCTION
1609.09106#1
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
2
filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. The dataset contains frame-level features for over 1.9 billion video frames and 8 million videos, making it the largest public multi-label video dataset.
1609.08675#2
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
2
# 1 INTRODUCTION In this work, we consider an approach of using a small network (called a "hypernetwork") to generate the weights for a larger network (called a main network). The behavior of the main network is the same as any usual neural network: it learns to map some raw inputs to their desired targets; whereas the hypernetwork takes a set of inputs that contain information about the structure of the weights and generates the weight for that layer (see Figure 1). Figure 1: A hypernetwork generates the weights for a feedforward network. Black connections and parameters are associated with the main network whereas orange connections and parameters are associated with the hypernetwork. HyperNEAT (Stanley et al., 2009) is an example of hypernetworks where the inputs are a set of virtual coordinates for each weight in the main network. In this work, we will focus on a more powerful approach where the input is an embedding vector that describes the entire weights of a given layer. Our embedding vectors can be fixed parameters that are also learned during end-to-end training, allowing approximate weight-sharing within a layer and across layers of the main network. (*Work done as a member of the Google Brain Residency program (g.co/brainresidency).)
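To make Figure 1 concrete, here is a small PyTorch-style sketch (our illustration, not the paper's implementation): each layer owns a learned embedding, a small hypernetwork maps that embedding to the layer's weight matrix, and the generated weights are used in the forward pass, so the embedding and the hypernetwork are trained end-to-end with the rest of the model.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weight matrix is generated from a learned embedding."""
    def __init__(self, in_dim, out_dim, z_dim=4):
        super().__init__()
        self.z = nn.Parameter(torch.randn(z_dim))         # layer embedding
        self.hyper = nn.Linear(z_dim, in_dim * out_dim)   # the hypernetwork
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x):
        W = self.hyper(self.z).view(self.out_dim, self.in_dim)  # generated weights
        return x @ W.t() + self.bias

# The main network runs on generated weights; embeddings and hypernetworks are
# trained end-to-end by ordinary backpropagation.
main_net = nn.Sequential(HyperLinear(784, 256), nn.ReLU(), HyperLinear(256, 10))
out = main_net(torch.randn(8, 784))
```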
1609.09106#2
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
3
Figure 1: YouTube-8M is a large-scale benchmark for general multi-label video classification. This screenshot of a dataset explorer depicts a subset of videos in the dataset annotated with the entity “Guitar”. The dataset explorer allows browsing and searching of the full vocabulary of Knowledge Graph entities, grouped in 24 top-level verticals, along with corresponding videos. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using the publicly-available TensorFlow framework. We plan to release code for training a basic TensorFlow model and for computing metrics. Representations learned on this dataset also transfer to other benchmarks like Sports-1M and ActivityNet: we achieve state-of-the-art on ActivityNet, improving mAP from 53.8% to 77.6%. We hope that the unprecedented scale and diversity of YouTube-8M will lead to advances in video understanding and representation learning.
1609.08675#3
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
3
(*Work done as a member of the Google Brain Residency program (g.co/brainresidency).) In addition, our embedding vectors can also be generated dynamically by our hypernetwork, allowing the weights of a recurrent network to change over timesteps and also adapt to the input sequence. We perform experiments to investigate the behaviors of hypernetworks in a range of contexts and find that hypernetworks mix well with other techniques such as batch normalization and layer normalization. Our main result is that hypernetworks can generate non-shared weights for LSTM that work better than the standard version of LSTM (Hochreiter & Schmidhuber, 1997). On language modelling tasks with the Character Penn Treebank and Hutter Prize Wikipedia datasets, hypernetworks for LSTM achieve near state-of-the-art results. On a handwriting generation task with the IAM handwriting dataset, hypernetworks for LSTM achieve high quantitative and qualitative results. On image classification with CIFAR-10, hypernetworks, when used to generate weights for a deep convnet (LeCun et al., 1990), obtain respectable results compared to state-of-the-art models while having fewer learnable parameters. In addition to simple tasks, we show that hypernetworks for LSTM offer an increase in performance for large, production-level neural machine translation models.
1609.09106#3
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
4
ous tasks beyond classification [41, 9, 31]. In a similar vein, the number and size of video benchmarks are growing with the availability of Sports-1M [19] for sports videos and ActivityNet [12] for human activities. However, unlike ImageNet, which contains a diverse and general set of objects/entities, existing video benchmarks are restricted to action and sports classes. In this paper, we introduce YouTube-8M 1, a large-scale benchmark dataset for general multi-label video classification. We treat the task of video classification as that of producing labels that are relevant to a video given its frames. Therefore, unlike Sports-1M and ActivityNet, YouTube-8M is not restricted to action classes alone. For example, Figure 1 shows random video examples for the Guitar entity. # INTRODUCTION
1609.08675#4
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
4
# 2 MOTIVATION AND RELATED WORK Our approach is inspired by methods in evolutionary computing, where it is difficult to directly operate in large search spaces consisting of millions of weight parameters. A more efficient method is to evolve a smaller network to generate the structure of weights for a larger network, so that the search is constrained within the much smaller weight space. An instance of this approach is the work on the HyperNEAT framework (Stanley et al., 2009). In the HyperNEAT framework, Compositional Pattern-Producing Networks (CPPNs) are evolved to define the weight structure of a much larger main network. Closely related to our approach is a simplified variation of HyperNEAT, where the structure is fixed and the weights are evolved through the Discrete Cosine Transform (DCT); this variant is called Compressed Weight Search (Koutnik et al., 2010). Even more closely related to our approach are Differentiable Pattern Producing Networks (DPPNs), where the structure is evolved but the weights are learned (Fernando et al., 2016), and ACDC-Networks (Moczulski et al., 2015), where linear layers are compressed with DCT and the parameters are learned.
1609.09106#4
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
5
# INTRODUCTION Large-scale datasets such as ImageNet [6] have been key enablers of recent progress in image understanding [20, 14, 11]. By supporting the learning process of deep networks with millions of parameters, such datasets have played a crucial role for the rapid progress of image understanding to near-human level accuracy [30]. Furthermore, intermediate layer activations of such networks have proven to be powerful and interpretable for various tasks beyond classification. We first construct a visual annotation vocabulary from Knowledge Graph entities that appear as topic annotations for YouTube videos based on the YouTube video annotation system [2]. To ensure that our vocabulary consists of entities that are recognizable visually, we use various filtering criteria, including human raters. The entities in the dataset span activities (sports, games, hobbies), objects (autos, food, products), scenes (travel), and events. (1: http://research.google.com/youtube8m) The
1609.08675#5
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
5
Most reported results using these methods, however, are in small scales, perhaps because they are both slow to train and require heuristics to be efficient. The main difference between our approach and HyperNEAT is that hypernetworks in our approach are trained end-to-end with gradient descent together with the main network, and therefore are more efficient. In addition to end-to-end learning with gradient descent, our approach strikes a good balance between Compressed Weight Search and HyperNEAT in terms of model flexibility and training simplicity. First, it can be argued that Discrete Cosine Transform used in Compressed Weight Search may be too simple and using the DCT prior may not be suitable for many problems. Second, even though HyperNEAT is more flexible, evolving both the architecture and the weights in HyperNEAT is often an overkill for most practical problems.
1609.09106#5
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
6
# 1 http://research.google.com/youtube8m [Figure 2 chart: image datasets (e.g. MNIST, Caltech 101/256, PASCAL, SUN, MS-COCO, ImageNet/ILSVRC) and video datasets (e.g. Hollywood, UCF-101, ActivityNet, YouTube-8M) plotted against the total number of classes.] Figure 2: The progression of datasets for image and video understanding tasks. Large datasets have played a key role for advances in both areas. The entities were selected using a combination of their popularity on YouTube and manual ratings of their visualness according to human raters. They are an attempt to describe the central themes of videos using a few succinct labels.
1609.08675#6
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
6
Even before the work on HyperNEAT and DCT, Schmidhuber (1992; 1993) has suggested the concept of fast weights in which one network can produce context-dependent weight changes for a second network. Small scale experiments were conducted to demonstrate fast weights for feedforward networks at the time, but perhaps due to the lack of modern computational tools, the recurrent network version was mentioned mainly as a thought experiment (Schmidhuber, 1993). A subsequent work demonstrated practical applications of fast weights (Gomez & Schmidhuber, 2005), where a generator network is learnt through evolution to solve an artificial control problem. The concept of a network interacting with another network is central to the work of (Jaderberg et al., 2016; Andrychowicz et al., 2016), and especially (Denil et al., 2013; Yang et al., 2015; Bertinetto et al., 2016; De Brabandere et al., 2016), where certain parameters in a convolutional network are predicted by another network. These studies however did not explore the use of this approach to recurrent networks, which is a main contribution of our work.
1609.09106#6
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
7
We then collect a sample set of videos for each entity, and use a publicly available state-of-the-art Inception network [4] to extract features from them. Specifically, we decode videos at one frame-per-second and extract the last hidden representation before the classification layer for each frame. We compress the frame-level features and make them available on our website for download. Overall, YouTube-8M contains more than 8 million videos—over 500,000 hours of video—from 4,800 classes. Figure 2 illustrates the scale of YouTube-8M, compared to existing image and video datasets. We hope that the unprecedented scale and diversity of this dataset will be a useful resource for developing advanced video understanding and representation learning techniques. Towards this end, we provide extensive experiments comparing several state-of-the-art techniques for video representation learning, including Deep Networks [26] and LSTMs (Long Short-Term Memory Networks) [13], on this dataset. In addition, we show that transferring video feature representations learned on this dataset leads to significant improvements on other benchmarks such as Sports-1M and ActivityNet.
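A sketch of this frame-level feature extraction (our illustration: `decode_at_one_fps` and `preprocess` are placeholders you would supply, and torchvision's `inception_v3` merely stands in for the Inception network the authors used):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative sketch only: decode_at_one_fps() and preprocess() are placeholders,
# and torchvision's inception_v3 stands in for the Inception network used by the
# authors; it is not their pipeline.
cnn = models.inception_v3(pretrained=True)
cnn.fc = nn.Identity()      # drop the classification layer -> 2048-d hidden features
cnn.eval()

@torch.no_grad()
def frame_features(video_path, decode_at_one_fps, preprocess):
    feats = []
    for frame in decode_at_one_fps(video_path):   # one RGB frame per second
        x = preprocess(frame).unsqueeze(0)         # -> (1, 3, 299, 299) tensor
        feats.append(cnn(x).squeeze(0))            # representation before the classifier
    return torch.stack(feats)                      # (num_seconds, 2048)
```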
1609.08675#7
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
7
The focus of this work is to generate weights for practical architectures, such as convolutional networks and recurrent networks, by taking layer embedding vectors as inputs. However, our hypernetworks can also be utilized to generate weights for a fully connected network by taking coordinate information as inputs similar to DPPNs. Using this setting, hypernetworks can approximately recover the convolutional architecture without explicitly being told to do so, a similar result obtained by "Convolution by Evolution" (Fernando et al., 2016). This result is described in Appendix A.1. # 3 METHODS In this paper, we view convolutional networks and recurrent networks as two ends of a spectrum. On one end, recurrent networks can be seen as imposing weight-sharing across layers, which makes them inflexible and difficult to learn due to vanishing gradients. On the other end, convolutional networks enjoy the flexibility of not having weight-sharing, at the expense of having redundant parameters when the networks are deep. Hypernetworks can be seen as a form of relaxed weight-sharing, and therefore strike a balance between the two ends. See Appendix A.2 for conceptual diagrams of Static and Dynamic Hypernetworks. 3.1 STATIC HYPERNETWORK: A WEIGHT FACTORIZATION APPROACH FOR DEEP CONVOLUTIONAL NETWORKS
1609.09106#7
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
8
In the rest of the paper, we first review existing benchmarks for image and video classification in Section 2. We present the details of our dataset including the collection process and a brief analysis of the categories and videos in Section 3. In Section 4, we review several approaches for the task of multi-label video classification given fixed frame-level features, and evaluate the approaches on the dataset. In Section 5, we show that features and models learned on our large-scale dataset generalize very well on other benchmarks. We offer concluding remarks in Section 6. # 2. RELATED WORK Image benchmarks have played a significant role in advancing computer vision algorithms for image understanding. Starting from a number of well labeled small-scale datasets such as Caltech 101/256 [8, 10], MSRC [32], PASCAL [7], image understanding research has rapidly advanced to utilizing larger datasets such as ImageNet [6] and SUN [38] for the next generation of vision algorithms. ImageNet in particular has enabled the development of deep feature learning techniques with millions of parameters such as the AlexNet [20] and Inception [14] architectures due to the number of classes (21841), the diversity of the classes (27 top-level categories) and the millions of labeled images available.
1609.08675#8
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]
1609.09106
8
3.1 STATIC HYPERNETWORK: A WEIGHT FACTORIZATION APPROACH FOR DEEP CONVOLUTIONAL NETWORKS First we will describe how we construct a hypernetwork for the purpose of generating the weights of a feedforward convolutional network. In a typical deep convolutional network, the majority of model parameters are in the kernels of convolutional layers. Each kernel contains $N_{in} \times N_{out}$ filters and each filter has dimensions $f_{size} \times f_{size}$. Let us suppose that these parameters are stored in a matrix $K^j \in \mathbb{R}^{N_{in} f_{size} \times N_{out} f_{size}}$ for each layer $j = 1, \dots, D$, where $D$ is the depth of the main convolutional network. For each layer $j$, the hypernetwork receives a layer embedding $z^j \in \mathbb{R}^{N_z}$ as input and predicts $K^j$, which can be generally written as follows: $K^j = g(z^j), \quad \forall j = 1, \dots, D \qquad (1)$
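Equation (1) can be read directly as code. In the sketch below (our illustration; the paper goes on to define a specific two-layer form for $g$, whereas here $g$ is a generic small MLP), the layer embedding $z^j$ is a learned parameter and $g$ maps it to the full convolution kernel $K^j$, which is then used by `F.conv2d`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    """Conv layer whose kernel K^j = g(z^j) is produced by a hypernetwork g."""
    def __init__(self, n_in, n_out, f_size=3, z_dim=64, hidden=128):
        super().__init__()
        self.z = nn.Parameter(torch.randn(z_dim))          # layer embedding z^j
        self.g = nn.Sequential(                             # hypernetwork g(.)
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out * n_in * f_size * f_size))
        self.kernel_shape = (n_out, n_in, f_size, f_size)

    def forward(self, x):
        K = self.g(self.z).view(self.kernel_shape)          # K^j = g(z^j)
        return F.conv2d(x, K, padding=self.kernel_shape[-1] // 2)

layer = HyperConv2d(n_in=16, n_out=32)
y = layer(torch.randn(2, 16, 8, 8))   # gradients flow into both g and z^j
```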
1609.09106#8
HyperNetworks
This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
http://arxiv.org/pdf/1609.09106
David Ha, Andrew Dai, Quoc V. Le
cs.LG
null
null
cs.LG
20160927
20161201
[ { "id": "1603.09025" } ]
1609.08675
9
[20] and Inception [14] architectures due to the number of classes (21841), the diversity of the classes (27 top-level categories) and the millions of labeled images available. A similar effort is in progress in the video understanding domain where the community has quickly progressed from small, well-labeled datasets such as KTH [22], Hollywood 2 [23], Weizmann [5], with a few thousand video clips, to medium-scale datasets such as UCF101 [33], Thumos'14 [16] and HMDB51 [21], with more than 50 action categories. Currently, the largest available video benchmarks are the Sports-1M [19], with 487 sports related activities and 1M videos, the YFCC-100M [34], with 800K videos and raw metadata (titles, descriptions, tags) for some of them, the FCVID [17] dataset of 91,223 videos manually annotated with 239 categories, and ActivityNet [12], with ∼200 human activity classes and a few thousand videos. However, almost all current video benchmarks are restricted to recognizing action and activity categories, and have less than 500 categories. YouTube-8M fills the gap in video benchmarks as follows: • A large-scale video annotation and representation learning benchmark, reflecting the main themes of a video.
1609.08675#9
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
http://arxiv.org/pdf/1609.08675
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan
cs.CV
10 pages
null
cs.CV
20160927
20160927
[ { "id": "1502.07209" } ]