doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable ⌀) | journal_ref (string, 8–194 chars, nullable ⌀) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1610.03017 | 48 | Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado.
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation systems be evaluated by the crowd alone? Natural Language Engineering, FirstView.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Ray S. Jackendoff. 1992. Semantic Structures, volume 18. MIT Press.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. | 1610.03017#48 | Fully Character-Level Neural Machine Translation without Explicit Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment. | http://arxiv.org/pdf/1610.03017 | Jason Lee, Kyunghyun Cho, Thomas Hofmann | cs.CL, cs.LG | Transactions of the Association for Computational Linguistics (TACL),
2017 | null | cs.CL | 20161010 | 20170613 | [
{
"id": "1602.00367"
},
{
"id": "1609.08144"
},
{
"id": "1511.04586"
}
] |
1610.03017 | 49 | Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML).
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. | 1610.03017#49 | Fully Character-Level Neural Machine Translation without Explicit Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment. | http://arxiv.org/pdf/1610.03017 | Jason Lee, Kyunghyun Cho, Thomas Hofmann | cs.CL, cs.LG | Transactions of the Association for Computational Linguistics (TACL),
2017 | null | cs.CL | 20161010 | 20170613 | [
{
"id": "1602.00367"
},
{
"id": "1609.08144"
},
{
"id": "1511.04586"
}
] |
1610.03017 | 50 | Meeting of the Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2015. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28.
Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W. Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. | 1610.03017#50 | Fully Character-Level Neural Machine Translation without Explicit Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment. | http://arxiv.org/pdf/1610.03017 | Jason Lee, Kyunghyun Cho, Thomas Hofmann | cs.CL, cs.LG | Transactions of the Association for Computational Linguistics (TACL),
2017 | null | cs.CL | 20161010 | 20170613 | [
{
"id": "1602.00367"
},
{
"id": "1609.08144"
},
{
"id": "1511.04586"
}
] |
1610.03017 | 51 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28. | 1610.03017#51 | Fully Character-Level Neural Machine Translation without Explicit Segmentation | Most existing machine translation systems operate at the level of words,
relying on explicit segmentation to extract tokens. We introduce a neural
machine translation (NMT) model that maps a source character sequence to a
target character sequence without any segmentation. We employ a character-level
convolutional network with max-pooling at the encoder to reduce the length of
source representation, allowing the model to be trained at a speed comparable
to subword-level models while capturing local regularities. Our
character-to-character model outperforms a recently proposed baseline with a
subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable
performance on FI-EN and RU-EN. We then demonstrate that it is possible to
share a single character-level encoder across multiple languages by training a
model on a many-to-one translation task. In this multilingual setting, the
character-level encoder significantly outperforms the subword-level encoder on
all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality
of the multilingual character-level translation even surpasses the models
specifically trained on that language pair alone, both in terms of BLEU score
and human judgment. | http://arxiv.org/pdf/1610.03017 | Jason Lee, Kyunghyun Cho, Thomas Hofmann | cs.CL, cs.LG | Transactions of the Association for Computational Linguistics (TACL),
2017 | null | cs.CL | 20161010 | 20170613 | [
{
"id": "1602.00367"
},
{
"id": "1609.08144"
},
{
"id": "1511.04586"
}
] |
1610.02136 | 0 | Published as a conference paper at ICLR 2017
# A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS
Dan Hendrycks* University of California, Berkeley [email protected]
Kevin Gimpel Toyota Technological Institute at Chicago [email protected]
# ABSTRACT
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
# INTRODUCTION | 1610.02136#0 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 0 | # Xception: Deep Learning with Depthwise Separable Convolutions
# François Chollet Google, Inc. [email protected]
# Abstract
We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. | 1610.02357#0 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 0 | # Equality of Opportunity in Supervised Learning
# Moritz Hardt
# Eric Price
# Nathan Srebro
February 18, 2022
# Abstract
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests.
# 1 Introduction | 1610.02413#0 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 1 | # INTRODUCTION
When machine learning classifiers are employed in real-world tasks, they tend to fail when the training and test distributions differ. Worse, these classifiers often fail silently by providing high-confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al., 2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption or cause serious accidents. For example, a medical diagnosis model may consistently classify with high confidence, even while it should flag difficult examples for human intervention. The resulting unflagged, erroneous diagnoses could blockade future machine learning technologies in medicine. More generally and importantly, estimating when a model is in error is of great concern to AI Safety (Amodei et al., 2016).
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 1 | as GoogLeNet (Inception V1), later refined as Inception V2 [7], Inception V3 [21], and most recently Inception-ResNet [19]. Inception itself was inspired by the earlier Network-In-Network architecture [11]. Since its first introduction, Inception has been one of the best-performing families of models on the ImageNet dataset [14], as well as internal datasets in use at Google, in particular JFT [5].
The fundamental building block of Inception-style models is the Inception module, of which several different versions exist. In figure 1 we show the canonical form of an Inception module, as found in the Inception V3 architecture. An Inception model can be understood as a stack of such modules. This is a departure from earlier VGG-style networks which were stacks of simple convolution layers.
While Inception modules are conceptually similar to convolutions (they are convolutional feature extractors), they empirically appear to be capable of learning richer representations with fewer parameters. How do they work, and how do they differ from regular convolutions? What design strategies come after Inception?
# 1.1. The Inception hypothesis
# 1. Introduction | 1610.02357#1 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 1 | # 1 Introduction
As machine learning increasingly affects decisions in domains protected by anti-discrimination law, there is much interest in algorithmically measuring and ensuring fairness in machine learning. In domains such as advertising, credit, employment, education, and criminal justice, machine learning could help obtain more accurate predictions, but its effect on existing biases is not well understood. Although reliance on data and quantitative measures can help quantify and eliminate existing biases, some scholars caution that algorithms can also introduce new biases or perpetuate existing ones [BS16]. In May 2014, the Obama Administration's Big Data Working Group released a report [PPM+14] arguing that discrimination can sometimes "be the inadvertent outcome of the way big data technologies are structured and used" and pointed toward "the potential of encoding discrimination in automated decisions". A subsequent White House report [Whi16] calls for "equal opportunity by design" as a guiding principle in domains such as credit scoring.
Despite the demand, a vetted methodology for avoiding discrimination against protected attributes in machine learning is lacking. A naïve approach might require that the algorithm should ignore all protected attributes such as race, color, religion, gender, disability, or family status. However, this idea of "fairness through unawareness" is ineffective due to the existence of redundant encodings, ways of predicting protected attributes from other features [PRT08]. | 1610.02413#1 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
These high-confidence predictions are frequently produced by softmaxes because softmax probabilities are computed with the fast-growing exponential function. Thus minor additions to the softmax inputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the softmax function is a smooth approximation of an indicator function, it is uncommon to see a uniform distribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into an MNIST image classifier gives a "prediction confidence" or predicted class probability of 91%, as we show later. Throughout our experiments we establish that the prediction probability from a softmax distribution has a poor direct correspondence to confidence. This is consistent with a great deal of anecdotal evidence from researchers (Nguyen & O'Connor, 2015; Yu et al., 2010; Provost et al., 1998; Nguyen et al., 2015).
However, in this work we also show the prediction probability of incorrect and out-of-distribution examples tends to be lower than the prediction probability for correct examples. Therefore, capturing prediction probability statistics about correct or in-sample examples is often sufficient for detecting whether an example is in error or abnormal, even though the prediction probability viewed in isolation can be misleading. | 1610.02136#2 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
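A minimal sketch (not the paper's code) of the maximum-softmax-probability score this chunk describes; the logits below are synthetic stand-ins for a trained classifier's outputs:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: the baseline detection score.
    return softmax(logits).max(axis=-1)

rng = np.random.default_rng(0)
# Stand-in logits for 5 inputs and 10 classes; because of the exponential,
# even meaningless logits often yield high "confidence".
fake_logits = rng.normal(scale=3.0, size=(5, 10))
print(msp_score(fake_logits))
```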
1610.02357 | 2 | # 1.1. The Inception hypothesis
# 1. Introduction
Convolutional neural networks have emerged as the master algorithm in computer vision in recent years, and developing recipes for designing them has been a subject of considerable attention. The history of convolutional neural network design started with LeNet-style models [10], which were simple stacks of convolutions for feature extraction and max-pooling operations for spatial sub-sampling. In 2012, these ideas were refined into the AlexNet architecture [9], where convolution operations were being repeated multiple times in-between max-pooling operations, allowing the network to learn richer features at every spatial scale. What followed was a trend to make this style of network increasingly deeper, mostly driven by the yearly ILSVRC competition; first with Zeiler and Fergus in 2013 [25] and then with the VGG architecture in 2014 [18].
A convolution layer attempts to learn filters in a 3D space, with 2 spatial dimensions (width and height) and a channel dimension; thus a single convolution kernel is tasked with simultaneously mapping cross-channel correlations and spatial correlations. | 1610.02357#2 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 2 | Another common conception of non-discrimination is demographic parity. Demographic parity requires that a decision, such as accepting or denying a loan application, be independent of the protected attribute. In the case of a binary decision Ŷ ∈ {0,1} and a binary protected attribute A ∈ {0,1}, this constraint can be formalized by asking that Pr{Ŷ = 1 | A = 0} = Pr{Ŷ = 1 | A = 1}. In other words, membership in a protected class should have no correlation with the decision. Through its various equivalent formalizations this idea appears in numerous papers. Unfortunately, as was already argued by Dwork et al. [DHP+12], the notion is seriously flawed on two counts. First, it doesn't ensure fairness. Indeed, the notion permits that we accept qualified applicants in the demographic A = 0, but unqualified individuals in A = 1, so long as the percentages of acceptance match. This behavior can arise naturally, when there is little or no training data available within A = 1. Second, demographic parity often cripples the utility that we might hope to achieve. Just imagine the common scenario in which the target variable Y, whether an individual actually defaults or not, is correlated with A. Demographic parity would not allow the ideal predictor Ŷ = Y, which can hardly be considered discriminatory as it represents the actual outcome. As a result, the loss in utility of introducing demographic parity can be substantial. | 1610.02413#2 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
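A hedged sketch (not from the paper) of checking demographic parity empirically; `y_hat` and `a` are hypothetical arrays of binary decisions and group labels:

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Absolute difference in acceptance rates between two groups.

    Demographic parity asks Pr{Y_hat = 1 | A = 0} == Pr{Y_hat = 1 | A = 1}.
    """
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    rate0 = y_hat[a == 0].mean()  # acceptance rate in group A = 0
    rate1 = y_hat[a == 1].mean()  # acceptance rate in group A = 1
    return abs(rate0 - rate1)

# A gap of 0 can coexist with accepting unqualified people in one group and
# qualified people in the other -- exactly the flaw discussed above.
print(demographic_parity_gap([1, 0, 1, 0], [0, 0, 1, 1]))  # -> 0.0
```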
1610.02136 | 3 | These prediction probabilities form our detection baseline, and we demonstrate its efficacy through various computer vision, natural language processing, and automatic speech recognition tasks. While these prediction probabilities create a consistently useful baseline, at times they are less effective, revealing room for improvement. To give ideas for future detection research, we contribute
*Work done while the author was at TTIC. Code is available at github.com/hendrycks/error-detection
one method which outperforms the baseline on some (but not all) tasks. This new method evaluates the quality of a neural network's input reconstruction to determine if an example is abnormal.
In addition to the baseline methods, another contribution of this work is the designation of standard tasks and evaluation metrics for assessing the automatic detection of errors and out-of-distribution examples. We use a large number of well-studied tasks across three research areas, using standard neural network architectures that perform well on them. For out-of-distribution detection, we provide ways to supply the out-of-distribution examples at test time like using images from different datasets and realistically distorting inputs. We hope that other researchers will pursue these tasks in future work and surpass the performance of our baselines. | 1610.02136#3 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
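A minimal sketch of the reconstruction-quality idea above. The paper's method reconstructs inputs with a neural network; here a linear PCA "autoencoder" stands in for a trained reconstruction model, and all data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "in-distribution" data living near a 16-dimensional subspace.
x_train = rng.normal(size=(500, 16)) @ rng.normal(size=(16, 64))

# Fit a linear "autoencoder" (PCA): encode = project onto top components.
mean = x_train.mean(axis=0)
_, _, vt = np.linalg.svd(x_train - mean, full_matrices=False)
components = vt[:16]

def reconstruction_score(x):
    # Higher reconstruction error suggests an abnormal input.
    recon = (x - mean) @ components.T @ components + mean
    return ((x - recon) ** 2).mean(axis=1)

ood = rng.normal(size=(5, 64))  # stand-in out-of-distribution inputs
print(reconstruction_score(x_train[:5]))  # near zero
print(reconstruction_score(ood))          # markedly larger
```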
1610.02357 | 3 | The idea behind the Inception module is to make this process easier and more efficient by explicitly factoring it into a series of operations that would independently look at cross-channel correlations and at spatial correlations. More precisely, the typical Inception module first looks at cross-channel correlations via a set of 1x1 convolutions, mapping the input data into 3 or 4 separate spaces that are smaller than the original input space, and then maps all correlations in these smaller 3D spaces, via regular 3x3 or 5x5 convolutions. This is illustrated in figure 1. In effect, the fundamental hypothesis behind Inception is that cross-channel correlations and spatial correlations are sufficiently decoupled that it is preferable not to map them jointly¹.
At this point a new style of network emerged, the Inception architecture, introduced by Szegedy et al. in 2014 [20] | 1610.02357#3 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 3 | In this paper, we consider non-discrimination from the perspective of supervised learning, where the goal is to predict a true outcome Y from features X based on labeled training data, while ensuring the predictions are "non-discriminatory" with respect to a specified protected attribute A. As in the usual supervised learning setting, we assume that we have access to labeled training data, in our case indicating also the protected attribute A. That is, to samples from the joint distribution of (X, A, Y). This data is used to construct a predictor Ŷ(X) or Ŷ(X, A), and we also use such data to test whether it is unfairly discriminatory.
Unlike demographic parity, our notion always allows for the perfectly accurate solution Ŷ = Y. More broadly, our criterion is easier to achieve the more accurate the predictor Ŷ is, aligning fairness with the central goal in supervised learning of building more accurate predictors. | 1610.02413#3 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 4 | In summary, while softmax classifier probabilities are not directly useful as confidence estimates, estimating model confidence is not as bleak as previously believed. Simple statistics derived from softmax distributions provide a surprisingly effective way to determine whether an example is misclassified or from a different distribution from the training data, as demonstrated by our experimental results spanning computer vision, natural language processing, and speech recognition tasks. This creates a strong baseline for detecting errors and out-of-distribution examples which we hope future research surpasses.
# 2 PROBLEM FORMULATION AND EVALUATION
In this paper, we are interested in two related problems. The first is error and success prediction: can we predict whether a trained classifier will make an error on a particular held-out test example; can we predict if it will correctly classify said example? The second is in- and out-of-distribution detection: can we predict whether a test example is from a different distribution from the training data; can we predict if it is from within the same distribution? Below we present a simple baseline for solving these two problems. To evaluate our solution, we use two evaluation metrics. | 1610.02136#4 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 4 | At this point a new style of network emerged, the Inception architecture, introduced by Szegedy et al. in 2014 [20]
¹A variant of the process is to independently look at width-wise correlations and height-wise correlations.
Consider a simplified version of an Inception module that only uses one size of convolution (e.g. 3x3) and does not include an average pooling tower (figure 2). This Inception module can be reformulated as a large 1x1 convolution followed by spatial convolutions that would operate on non-overlapping segments of the output channels (figure 3). This observation naturally raises the question: what is the effect of the number of segments in the partition (and their size)? Would it be reasonable to make a much stronger hypothesis than the Inception hypothesis, and assume that cross-channel correlations and spatial correlations can be mapped completely separately?
Figure 1. A canonical Inception module (Inception V3).
Figure 2. A simplified Inception module.
# 1.2. The continuum between convolutions and separable convolutions | 1610.02357#4 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
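A sketch (not the paper's code) of the figure-3 reformulation described in the chunk above: one large 1x1 convolution, then 3x3 convolutions over non-overlapping channel segments. PyTorch's grouped convolution expresses the segmenting; the class name and channel counts are hypothetical:

```python
import torch
import torch.nn as nn

class SegmentedInception(nn.Module):
    """1x1 convolution followed by independent 3x3 convolutions over
    non-overlapping channel segments (groups)."""

    def __init__(self, in_ch=256, out_ch=256, segments=4):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # groups=segments gives one independent 3x3 conv per channel segment.
        self.spatial = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                 padding=1, groups=segments)

    def forward(self, x):
        return self.spatial(self.pointwise(x))

x = torch.randn(1, 256, 32, 32)
print(SegmentedInception()(x).shape)  # torch.Size([1, 256, 32, 32])
```

Increasing `segments` toward the channel count moves along the continuum toward the "extreme" module discussed next.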
1610.02413 | 4 | The notion we propose is âobliviousâ, in that it is based only on the joint distribution, or joint statistics, of the true target Y, the predictions Y, and the protected attribute A. In particular, it does not evaluate the features in X nor the functional form of the predictor Y(X) nor how it was derived. This matches other tests recently proposed and conducted, including demographic parity and different analyses of common risk scores. In many cases, only oblivious analysis is possible as the functional form of the score and underlying training data are not public. The only information about the score is the score itself, which can then be correlated with the target and protected attribute. Furthermore, even if the features or the functional form are available, going beyond oblivious analysis essentially requires subjective interpretation or casual assumptions about specific features, which we aim to avoid.
# 1.1 Summary of our contributions
We propose a simple, interpretable, and actionable framework for measuring and removing discrimination based on protected attributes. We argue that, unlike demographic parity, our framework provides a meaningful measure of discrimination, while demonstrating in theory and experiment that we also achieve much higher utility. Our key contributions are as follows: | 1610.02413#4 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 5 | Before mentioning the two evaluation metrics, we first note that comparing detectors is not as straightforward as using accuracy. For detection we have two classes, and the detector outputs a score for both the positive and negative class. If the negative class is far more likely than the positive class, a model may always guess the negative class and obtain high accuracy, which can be misleading (Provost et al., 1998). We must then specify a score threshold so that some positive examples are classified correctly, but this depends upon the trade-off between false negatives (fn) and false positives (fp).
Faced with this issue, we employ the Area Under the Receiver Operating Characteristic curve (AUROC) metric, which is a threshold-independent performance evaluation (Davis & Goadrich, 2006). The ROC curve is a graph showing the true positive rate (tpr = tp/(tp + fn)) and the false positive rate (fpr = fp/(fp + tn)) against each other. Moreover, the AUROC can be interpreted as the probability that a positive example has a greater detector score/value than a negative example (Fawcett, 2005). Consequently, a random positive example detector corresponds to a 50% AUROC, and a "perfect" classifier corresponds to 100%. | 1610.02136#5 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
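A small sketch (not from the paper) of the rank-based AUROC interpretation above: the probability that a positive example outscores a negative one. `scores_pos` and `scores_neg` are hypothetical detector outputs:

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    # Probability a random positive scores above a random negative,
    # counting ties as one half (the rank interpretation of AUROC).
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Hypothetical max-softmax scores: correct examples tend to score higher.
print(auroc([0.99, 0.95, 0.90], [0.60, 0.85, 0.92]))  # 8/9 ~ 0.89
```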
1610.02357 | 5 | # 1.2. The continuum between convolutions and separable convolutions
An "extreme" version of an Inception module, based on this stronger hypothesis, would first use a 1x1 convolution to map cross-channel correlations, and would then separately map the spatial correlations of every output channel. This is shown in figure 4. We remark that this extreme form of an Inception module is almost identical to a depthwise separable convolution, an operation that has been used in neural
¹A variant of the process is to independently look at width-wise correlations and height-wise correlations. This is implemented by some of the modules found in Inception V3, which alternate 7x1 and 1x7 convolutions. The use of such spatially separable convolutions has a long history in image processing and has been used in some convolutional neural network implementations since at least 2012 (possibly earlier).
Figure 3. A strictly equivalent reformulation of the simplified Inception module.
Figure 4. An "extreme" version of our Inception module, with one spatial convolution per output channel of the 1x1 convolution. | 1610.02357#5 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
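To make the efficiency argument concrete, a back-of-the-envelope parameter count (illustrative numbers, not from the paper): a regular 3x3 convolution learns one joint 3D kernel per output channel, while the factored form pays for spatial and cross-channel mapping separately.

```python
# Hypothetical layer size: 256 input and 256 output channels, 3x3 kernels.
c_in, c_out, k = 256, 256, 3

regular = k * k * c_in * c_out                      # joint spatial x channel
depthwise_separable = k * k * c_in + c_in * c_out   # depthwise + 1x1 pointwise

print(regular, depthwise_separable)  # 589824 vs 67840, roughly 8.7x fewer
```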
1610.02413 | 5 | ⢠We propose an easily checkable and interpretable notion of avoiding discrimination based on protected attributes. Our notion enjoys a natural interpretation in terms of graphical dependency models. It can also be viewed as shifting the burden of uncertainty in classiï¬cation from the protected class to the decision maker. In doing so, our notion helps to incentivize the collection of better features, that depend more directly on the target rather then the protected attribute, and of data that allows better prediction for all protected classes.
2
⢠We give a simple and eï¬ective framework for constructing classiï¬ers satisfying our cri- terion from an arbitrary learned predictor. Rather than changing a possibly complex training pipeline, the result follows via a simple post-processing step that minimizes the loss in utility.
⢠We show that the Bayes optimal non-discriminating (according to our deï¬nition) classiï¬er is the classiï¬er derived from any Bayes optimal (not necessarily non-discriminating) regressor using our post-processing step. Moreover, we quantify the loss that follows from imposing our non-discrimination condition in case the score we start from deviates from Bayesian optimality. This result helps to justify the approach of deriving a fair classiï¬er via post-processing rather than changing the original training process. | 1610.02413#5 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 6 | The AUROC sidesteps the issue of threshold selection, as does the Area Under the Precision-Recall curve (AUPR) which is sometimes deemed more informative (Manning & Schütze, 1999). This is because the AUROC is not ideal when the positive class and negative class have greatly differing base rates, and the AUPR adjusts for these different positive and negative base rates. For this reason, the AUPR is our second evaluation metric. The PR curve plots the precision (tp/(tp + fp)) and recall (tp/(tp + fn)) against each other. The baseline detector has an AUPR approximately equal to the precision (Saito & Rehmsmeier, 2015), and a "perfect" classifier has an AUPR of 100%. Consequently, the base rate of the positive class greatly influences the AUPR, so for detection we must specify which class is positive. In view of this, we show the AUPRs when we treat success/normal classes as positive, and then we show the areas when we treat the error/abnormal classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by -1 and labeling them positive. Note that treating error/abnormal | 1610.02136#6 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
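The Succ/Err evaluation described above can be sketched with scikit-learn, assuming the maximum softmax probabilities and correctness labels are available as arrays (max_prob and is_correct are illustrative names, not the paper's released code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# max_prob: maximum softmax probability per test example (the detection score)
# is_correct: 1 if the classifier's prediction was right, 0 otherwise
max_prob = np.array([0.99, 0.61, 0.95, 0.55, 0.87])
is_correct = np.array([1, 0, 1, 0, 1])

# "Success": correctly classified examples are the positive class.
auroc = roc_auc_score(is_correct, max_prob)
aupr_succ = average_precision_score(is_correct, max_prob)

# "Error": misclassified examples are positive; flip labels and negate scores.
# The AUROC is unchanged by this flip, since P(S > E) = P(-E > -S).
aupr_err = average_precision_score(1 - is_correct, -max_prob)
```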
1610.02357 | 6 | [Figure: an "extreme" version of an Inception module: Input, 1x1 conv, Output channels (one spatial convolution per output channel of the 1x1 conv)]
network design as early as 2014 [15] and has become more popular since its inclusion in the TensorFlow framework [1] in 2016.
A depthwise separable convolution, commonly called "separable convolution" in deep learning frameworks such as TensorFlow and Keras, consists of a depthwise convolution, i.e. a spatial convolution performed independently over each channel of an input, followed by a pointwise convolution, i.e. a 1x1 convolution, projecting the channels output by the depthwise convolution onto a new channel space. This is not to be confused with a spatially separable convolution, which is also commonly called "separable convolution" in the image processing community.
Two minor differences between an "extreme" version of an Inception module and a depthwise separable convolution would be:
• The order of the operations: depthwise separable convolutions as usually implemented (e.g. in TensorFlow) first perform channel-wise spatial convolution and then perform 1x1 convolution, whereas Inception performs the 1x1 convolution first. | 1610.02357#6 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
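As a concrete sketch of the factorization described above, the two-step and fused forms can be written in Keras as follows (layer sizes are illustrative; SeparableConv2D is the fused layer the text refers to):

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 64))

# Explicit two-step form: a 3x3 spatial convolution applied independently to
# each of the 64 input channels, then a 1x1 pointwise convolution that
# projects the result onto a new 128-dimensional channel space.
x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(inputs)
x = layers.Conv2D(filters=128, kernel_size=1)(x)

# Fused form, as commonly exposed by deep learning frameworks.
y = layers.SeparableConv2D(filters=128, kernel_size=3, padding="same")(inputs)
```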
1610.02413 | 6 | • We capture the inherent limitations of our approach, as well as any other oblivious approach, through a non-identifiability result showing that different dependency structures with possibly different intuitive notions of fairness cannot be separated based on any oblivious notion or test.
Throughout our work, we assume a source distribution over (Y, X, A), where Y is the target or true outcome (e.g. "default on loan"), X are the available features, and A is the protected attribute. Generally, the features X may be an arbitrary vector or an abstract object, such as an image. Our work does not refer to the particular form X has.
The objective of supervised learning is to construct a (possibly randomized) predictor Ŷ = f(X, A) that predicts Y well, as is typically measured through a loss function. Furthermore, we would like to require that Ŷ does not discriminate with respect to A, and the goal of this paper is to formalize this notion.
# 2 Equalized odds and equal opportunity
We now formally introduce our first criterion.
Definition 2.1 (Equalized odds). We say that a predictor Ŷ satisfies equalized odds with respect to protected attribute A and outcome Y, if Ŷ and A are independent conditional on Y. | 1610.02413#6 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
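For binary variables, Definition 2.1 can be checked empirically from samples; a minimal sketch with illustrative numpy arrays (yhat, y, a are assumed names):

```python
import numpy as np

def equalized_odds_gaps(yhat, y, a):
    """Return (TPR gap, FPR gap) between groups a=0 and a=1."""
    gaps = []
    for outcome in (1, 0):  # conditioning on y=1 gives TPR, on y=0 gives FPR
        rates = [yhat[(a == g) & (y == outcome)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return tuple(gaps)  # both gaps are zero under exact equalized odds
```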
1610.02136 | 7 | classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by −1 and labeling them positive. Note that treating error/abnormal classes as positive classes does not change the AUROC. (Footnote 1: We consider adversarial example detection techniques in a separate work (Hendrycks & Gimpel, 2016a). Footnote 2: A debatable, imprecise interpretation of AUROC values may be as follows: 90%–100%: Excellent; 80%–90%: Good; 70%–80%: Fair; 60%–70%: Poor; 50%–60%: Fail.) | 1610.02136#7 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 7 | • The presence or absence of a non-linearity after the first operation. In Inception, both operations are followed by a ReLU non-linearity, however depthwise separable convolutions are usually implemented without non-linearities.
We argue that the first difference is unimportant, in particular because these operations are meant to be used in a stacked setting. The second difference might matter, and we investigate it in the experimental section (in particular see figure 10).
We also note that other intermediate formulations of Inception modules that lie in between regular Inception modules and depthwise separable convolutions are also possible: in effect, there is a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. A regular convolution (preceded by a 1x1 convolution), at one extreme of this spectrum, corresponds to the single-segment case; a depthwise separable convolution corresponds to the other extreme where there is one segment per channel; Inception modules lie in between, dividing a few hundreds of channels into 3 or 4 segments. The properties of such intermediate modules appear not to have been explored yet. | 1610.02357#7 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
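Recent TensorFlow releases expose this spectrum directly through the groups argument of Conv2D (a framework detail assumed here, not something the paper relies on); a sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 256))
x = layers.Conv2D(256, 1)(inputs)  # the preceding 1x1 convolution

# groups=1   -> regular convolution (single channel-space segment)
# groups=4   -> Inception-like: channels split into 4 independent segments
# groups=256 -> one segment per channel, i.e. a depthwise convolution
x = layers.Conv2D(256, 3, padding="same", groups=4)(x)
```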
1610.02413 | 7 | Definition 2.1 (Equalized odds). We say that a predictor Ŷ satisfies equalized odds with respect to protected attribute A and outcome Y, if Ŷ and A are independent conditional on Y.
Unlike demographic parity, equalized odds allows Ŷ to depend on A but only through the target variable Y. As such, the definition encourages the use of features that allow to directly predict Y, but prohibits abusing A as a proxy for Y.
As stated, equalized odds applies to targets and protected attributes taking values in any space, including binary, multi-class, continuous or structured settings. The case of binary random variables Y, Ŷ and A is of central importance in many applications, encompassing the main conceptual and technical challenges. As a result, we focus most of our attention on this case, in which case equalized odds are equivalent to:
Pr{Ŷ = 1 | A = 0, Y = y} = Pr{Ŷ = 1 | A = 1, Y = y},   y ∈ {0, 1} | 1610.02413#7 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 8 | Treating error/abnormal classes as positive does not change the AUROC since, if S is a score for a successfully classified value and E is the score for an erroneously classified value, AUROC = P(S > E) = P(−E > −S).
We begin our experiments in Section 3 where we describe a simple baseline which uses the maximum probability from the softmax label distribution in neural network classifiers. Then in Section 4 we describe a method that uses an additional, auxiliary model component trained to reconstruct the input.
# 3 SOFTMAX PREDICTION PROBABILITY AS A BASELINE | 1610.02136#8 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
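The baseline score itself is just the predicted class's probability; a sketch over raw logits with illustrative names:

```python
import numpy as np

def max_softmax_probability(logits):
    """Maximum softmax probability per row, computed stably from logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)  # higher -> more likely correct / in-distribution
```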
1610.02357 | 8 | Having made these observations, we suggest that it may be possible to improve upon the Inception family of architectures by replacing Inception modules with depthwise separable convolutions, i.e. by building models that would be stacks of depthwise separable convolutions. This is made practical by the efficient depthwise convolution implementation available in TensorFlow. In what follows, we present a convolutional neural network architecture based on this idea, with a similar number of parameters as Inception V3, and we evaluate its performance against Inception V3 on two large-scale image classification tasks.
# 2. Prior work
The present work relies heavily on prior efforts in the following areas:
• Convolutional neural networks [10, 9, 25], in particular the VGG-16 architecture [18], which is schematically similar to our proposed architecture in a few respects.
• The Inception architecture family of convolutional neural networks [20, 7, 21, 19], which first demonstrated the advantages of factoring convolutions into multiple branches operating successively on channels and then on space. | 1610.02357#8 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 8 | Pr{Ŷ = 1 | A = 0, Y = y} = Pr{Ŷ = 1 | A = 1, Y = y},   y ∈ {0, 1}
For the outcome y = 1, the constraint requires that Ŷ has equal true positive rates across the two demographics A = 0 and A = 1. For y = 0, the constraint equalizes false positive rates. The definition aligns nicely with the central goal of building highly accurate classifiers, since Ŷ = Y is always an acceptable solution. However, equalized odds enforces that the accuracy is equally high in all demographics, punishing models that perform well only on the majority.
# 2.1 Equal opportunity
In the binary case, we often think of the outcome Y = 1 as the "advantaged" outcome, such as "not defaulting on a loan", "admission to a college" or "receiving a promotion". A possible relaxation of equalized odds is to require non-discrimination only within the "advantaged" outcome group. That is, to require that people who pay back their loan have an equal opportunity of getting the loan in the first place (without specifying any requirement for those that will ultimately default). This leads to a relaxation of our notion that we call "equal opportunity". | 1610.02413#8 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 9 | # 3 SOFTMAX PREDICTION PROBABILITY AS A BASELINE
In what follows we retrieve the maximum/predicted class probability from a softmax distribution and thereby detect whether an example is erroneously classified or out-of-distribution. Specifically, we separate correctly and incorrectly classified test set examples and, for each example, compute the softmax probability of the predicted class, i.e., the maximum softmax probability.3 From these two groups we obtain the area under PR and ROC curves. These areas summarize the performance of a binary classifier discriminating with values/scores (in this case, maximum probabilities from the softmaxes) across different thresholds. This description treats correctly classified examples as the positive class, denoted "Success" or "Succ" in our tables. In "Error" or "Err" we treat the incorrectly classified examples as the positive class; to do this we label incorrectly classified examples as positive and take the negatives of the softmax probabilities of the predicted classes as the scores. | 1610.02136#9 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02413 | 9 | Definition 2.2 (Equal opportunity). We say that a binary predictor Ŷ satisfies equal opportunity with respect to A and Y if Pr{Ŷ = 1 | A = 0, Y = 1} = Pr{Ŷ = 1 | A = 1, Y = 1}.
Equal opportunity is a weaker, though still interesting, notion of non-discrimination, and thus typically allows for stronger utility as we shall see in our case study.
# 2.2 Real-valued scores
Even if the target is binary, a real-valued predictive score R = f(X, A) is often used (e.g. FICO scores for predicting loan default), with the interpretation that higher values of R correspond to greater likelihood of Y = 1 and thus a bias toward predicting Ŷ = 1. A binary classifier Ŷ can be obtained by thresholding the score, i.e. setting Ŷ = I{R > t} for some threshold t. Varying this threshold changes the trade-off between sensitivity (true positive rate) and specificity (true negative rate).
Our definition for equalized odds can be applied also to such score functions: a score R satisfies equalized odds if R is independent of A given Y. If a score obeys equalized odds, then any thresholding Ŷ = I{R > t} of it also obeys equalized odds (as does any other predictor derived from R alone). | 1610.02413#9 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individualfeatures. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
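One way to use such a score non-discriminatorily, anticipating Section 4, is to pick a group-dependent threshold t_a so that true positive rates match across groups; a sketch with illustrative numpy code and an assumed target TPR:

```python
import numpy as np

def equal_opportunity_thresholds(r, y, a, target_tpr=0.8):
    """Pick a threshold t_a per group so each group's TPR is ~target_tpr."""
    thresholds = {}
    for g in np.unique(a):
        positives = r[(a == g) & (y == 1)]  # scores of truly positive examples
        # The (1 - target_tpr) quantile leaves a target_tpr fraction above t_a.
        thresholds[g] = np.quantile(positives, 1 - target_tpr)
    return thresholds

def predict(r, a, thresholds):
    return (r > np.vectorize(thresholds.get)(a)).astype(int)
```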
1610.02136 | 10 | For "In," we treat the in-distribution, correctly classified test set examples as positive and use the softmax probability for the predicted class as a score, while for "Out" we treat the out-of-distribution examples as positive and use the negative of the aforementioned probability. Since the AUPRs for Success, Error, In, Out classifiers depend on the rate of positive examples, we list what area a random detector would achieve with "Base" values. Also in the upcoming results we list the mean predicted class probability of wrongly classified examples (Pred Prob Wrong (mean)) to demonstrate that the softmax prediction probability is a misleading confidence proxy when viewed in isolation. The "Pred. Prob (mean)" columns show this same shortcoming but for out-of-distribution examples.
Table labels aside, we begin experimentation with datasets from vision, then consider tasks in natural language processing and automatic speech recognition. In all of the following experiments, the AUROCs differ from the random baselines with high statistical significance according to the Wilcoxon rank-sum test.
3.1 COMPUTER VISION | 1610.02136#10 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
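A sketch of the In/Out bookkeeping, assuming scikit-learn and small illustrative score arrays:

```python
import numpy as np
from sklearn.metrics import average_precision_score

score_in = np.array([0.98, 0.92, 0.88])  # max softmax probs, in-distribution test set
score_out = np.array([0.70, 0.55])       # max softmax probs, out-of-distribution set

scores = np.concatenate([score_in, score_out])
is_in = np.concatenate([np.ones(len(score_in)), np.zeros(len(score_out))])

aupr_in = average_precision_score(is_in, scores)        # "In" as the positive class
aupr_out = average_precision_score(1 - is_in, -scores)  # "Out" as the positive class
```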
1610.02357 | 10 | during an internship at Google Brain in 2013, and used them in AlexNet to obtain small gains in accuracy and large gains in convergence speed, as well as a significant reduction in model size. An overview of his work was first made public in a presentation at ICLR 2014 [23]. Detailed experimental results are reported in Sifre's thesis, section 6.2 [15]. This initial work on depthwise separable convolutions was inspired by prior research from Sifre and Mallat on transformation-invariant scattering [16, 15]. Later, a depthwise separable convolution was used as the first layer of Inception V1 and Inception V2 [20, 7]. Within Google, Andrew Howard [6] has introduced efficient mobile models called MobileNets using depthwise separable convolutions. Jin et al. in 2014 [8] and Wang et al. in 2016 [24] also did related work aiming at reducing the size and computational cost of convolutional neural networks using separable convolutions. Additionally, our work is only possible due to the inclusion of an efficient implementation of depthwise separable convolutions in the TensorFlow framework [1].
• Residual connections, introduced by He et al. in [4], which our proposed architecture uses extensively. | 1610.02357#10 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 10 | In Section 4, we will consider scores that might not satisfy equalized odds, and see how equalized odds predictors can be derived from them and the protected attribute A, by using different (possibly randomized) thresholds depending on the value of A. The same is possible for equality of opportunity without the need for randomized thresholds.
# 2.3 Oblivious measures
As stated before, our notions of non-discrimination are oblivious in the following formal sense.
Definition 2.3. A property of a predictor Ŷ or score R is said to be oblivious if it only depends on the joint distribution of (Y, A, Ŷ) or (Y, A, R), respectively.
As a consequence of being oblivious, all the information we need to verify our definitions is contained in the joint distribution of predictor, protected group and outcome, (Y, A, Ŷ). In the binary case, when A and Y are reasonably well balanced, the joint distribution of (Y, A, Ŷ) is determined by 8 parameters that can be estimated to very high accuracy from samples. We will therefore ignore the effect of finite sample perturbations and instead assume that we know the joint distribution of (Y, A, Ŷ).
# 3 Comparison with related work | 1610.02413#10 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individualfeatures. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
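Estimating those 8 parameters is straightforward; a sketch with illustrative arrays, where each cell of the 2x2x2 table is one parameter:

```python
import numpy as np

def joint_distribution(yhat, a, y):
    """Empirical joint distribution of (Yhat, A, Y) as a 2x2x2 table."""
    p = np.zeros((2, 2, 2))
    for yh, g, t in zip(yhat, a, y):
        p[yh, g, t] += 1
    return p / len(y)  # every oblivious test depends only on these 8 numbers
```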
1610.02136 | 11 | 3.1 COMPUTER VISION
In the following computer vision tasks, we use three datasets: MNIST, CIFAR-10, and CIFAR-100 (Krizhevsky, 2009). MNIST is a dataset of handwritten digits, consisting of 60000 training and 10000 testing examples. Meanwhile, CIFAR-10 has colored images belonging to 10 different classes, with 50000 training and 10000 testing examples. CIFAR-100 is more difficult, as it has 100 different classes with 50000 training and 10000 testing examples.
In Table 1, we see that correctly classified and incorrectly classified examples are sufficiently distinct and thus allow reliable discrimination. Note that the areas under the curves degrade with image recognizer test error. | 1610.02136#11 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 11 | • Residual connections, introduced by He et al. in [4], which our proposed architecture uses extensively.
# 3. The Xception architecture
We propose a convolutional neural network architecture based entirely on depthwise separable convolution layers. In effect, we make the following hypothesis: that the mapping of cross-channels correlations and spatial correlations in the feature maps of convolutional neural networks can be entirely decoupled. Because this hypothesis is a stronger version of the hypothesis underlying the Inception architecture, we name our proposed architecture Xception, which stands for "Extreme Inception".
A complete description of the specifications of the network is given in figure 5. The Xception architecture has 36 convolutional layers forming the feature extraction base of the network. In our experimental evaluation we will exclusively investigate image classification and therefore our convolutional base will be followed by a logistic regression layer. Optionally one may insert fully-connected layers before the logistic regression layer, which is explored in the experimental evaluation section (in particular, see figures 7 and 8). The 36 convolutional layers are structured into 14 modules, all of which have linear residual connections around them, except for the first and last modules. | 1610.02357#11 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
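A plausible rendering of one such module in Keras, with two separable convolutions and a linear residual connection around them (filter counts and the pooling step are illustrative, not the paper's exact specification):

```python
from tensorflow.keras import layers

def xception_module(x, filters):
    # Linear shortcut that matches the main path's output shape.
    shortcut = layers.Conv2D(filters, 1, strides=2, padding="same")(x)

    y = layers.Activation("relu")(x)
    y = layers.SeparableConv2D(filters, 3, padding="same")(y)
    y = layers.Activation("relu")(y)
    y = layers.SeparableConv2D(filters, 3, padding="same")(y)
    y = layers.MaxPooling2D(3, strides=2, padding="same")(y)
    return layers.add([y, shortcut])  # residual connection around the module
```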
1610.02413 | 11 | There is much work on this topic in the social sciences and legal scholarship; we point the reader to Barocas and Selbst [BS16] for an excellent entry point to this rich literature. See also the survey by Romei and Ruggieri [RR14], and the references at http://www.fatml.org/resources.html. In its various equivalent notions, demographic parity appears in many papers, such as [CKP09, Zli15, BZVGRG15] to name a few. Zemel et al. [ZWS+13] propose an interesting way of achieving demographic parity by aiming to learn a representation of the data that is independent of the protected attribute, while retaining as much information about the features X as possible. Louizos et al. [LSL+15] extend on this approach with deep variational auto-encoders. Feldman et al. [FFM+15] propose a formalization of "limiting disparate impact". For binary classifiers, the condition states that Pr{Ŷ = 1 | A = 0} ≤ 0.8 · Pr{Ŷ = 1 | A = 1}. The authors argue that this corresponds to the "80% rule" in the legal | 1610.02413#11 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individualfeatures. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 12 | Next, let us consider using softmax distributions to determine whether an example is in- or out-of-distribution. We use all test set examples as the in-distribution (positive) examples. For out-of-distribution (negative) examples, we use realistic images and noise. For CIFAR-10 and CIFAR-100, we use realistic images from the Scene UNderstanding dataset (SUN), which consists of 397 different scenes (Xiao et al., 2010). For MNIST, we use grayscale realistic images from three sources. Omniglot (Lake et al., 2015) images are handwritten characters rather than the handwritten digits in MNIST. Next, notMNIST (Bulatov, 2011) consists of typeface characters. Last of the realistic images, CIFAR-10bw are black and white rescaled CIFAR-10 images. The synthetic "Gaussian" data
Footnote 3: We also tried using the KL divergence of the softmax distribution from the uniform distribution for detection. With divergence values, detector AUROCs and AUPRs were highly correlated with AUROCs and AUPRs from a detector using the maximum softmax probability. This divergence is similar to entropy (Steinhardt & Liang, 2016; Williams & Renals, 1997). | 1610.02136#12 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
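The KL-from-uniform score of footnote 3 has a closed form: for a K-class softmax, KL(p || uniform) = log K − H(p). A sketch with illustrative names:

```python
import numpy as np

def kl_from_uniform(probs, eps=1e-12):
    """KL divergence of each softmax row from the uniform distribution."""
    k = probs.shape[1]
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.log(k) - entropy  # larger -> more peaked, like a high max-prob
```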
1610.02357 | 12 | In short, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections. This makes the architecture very easy to define and modify; it takes only 30 to 40 lines of code using a high-level library such as Keras [2] or TensorFlow-Slim [17], not unlike an architecture such as VGG-16 [18], but rather unlike architectures such as Inception V2 or V3 which are far more complex to define. An open-source implementation of Xception using Keras and TensorFlow is provided as part of the Keras Applications module2, under the MIT license.
# 4. Experimental evaluation
We choose to compare Xception to the Inception V3 architecture, due to their similarity of scale: Xception and Inception V3 have nearly the same number of parameters (table 3), and thus any performance gap could not be attributed to a difference in network capacity. We conduct our comparison on two image classification tasks: one is the well-known 1000-class single-label classification task on the ImageNet dataset [14], and the other is a 17,000-class multi-label classification task on the large-scale JFT dataset.
# 4.1. The JFT dataset | 1610.02357#12 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02136 | 13 | Dataset      AUROC/Base   AUPR Succ/Base   AUPR Err/Base   Pred. Prob Wrong (mean)   Test Set Error
MNIST        97/50        100/98           48/1.7          86                        1.69
CIFAR-10     93/50        100/95           43/5            80                        4.96
CIFAR-100    87/50        96/79            62/21           66                        20.7
Table 1: The softmax predicted class probability allows for discrimination between correctly and incorrectly classified test set examples. "Pred. Prob Wrong (mean)" is the mean softmax probability for wrongly classified examples, showcasing its shortcoming as a direct measure of confidence. Succ/Err Base values are the AUROCs or AUPRs achieved by random classifiers. All entries are percentages. | 1610.02136#13 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 13 | # 4.1. The JFT dataset
JFT is an internal Google dataset for large-scale image classification, first introduced by Hinton et al. in [5], which comprises over 350 million high-resolution images annotated with labels from a set of 17,000 classes. To evaluate the performance of a model trained on JFT, we use an auxiliary dataset, FastEval14k.
FastEval14k is a dataset of 14,000 images with dense annotations from about 6,000 classes (36.5 labels per image on average). On this dataset we evaluate performance using Mean Average Precision for top 100 predictions (MAP@100), and we weight the contribution of each class to MAP@100 with a score estimating how common (and therefore important) the class is among social media images. This evaluation procedure is meant to capture performance on frequently occurring labels from social media, which is crucial for production models at Google.
# 4.2. Optimization configuration
A different optimization configuration was used for ImageNet and JFT:
• On ImageNet:
– Optimizer: SGD
– Momentum: 0.9
– Initial learning rate: 0.045
– Learning rate decay: decay of rate 0.94 every 2 epochs
• On JFT:
– Optimizer: RMSprop [22] | 1610.02357#13 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 13 | While simple and seemingly intuitive, demographic parity has serious conceptual limitations as a fairness notion, many of which were pointed out in work of Dwork et al. [DHP+12]. In our experiments, we will see that demographic parity also falls short on utility. Dwork et al. [DHP+12] argue that a sound notion of fairness must be task-specific, and formalize fairness based on a hypothetical similarity measure d(x, x′) requiring similar individuals to receive a similar distribution over outcomes. In practice, however, it can be difficult to come up with a suitable metric. Our notion is task-specific in the sense that it makes critical use of the final outcome Y, while avoiding the difficulty of dealing with the features X.
In a recent concurrent work, Kleinberg, Mullainathan and Raghavan [KMR16] showed that in general a score that is calibrated within each group does not satisfy a criterion equivalent to equalized odds for binary predictors. This result highlights that calibration alone does not imply non-discrimination according to our measure. Conversely, achieving equalized odds may in general compromise other desirable properties of a score.
Early work of Pedreshi et al. [PRT08] and several follow-up works explore a logical rule-based approach to non-discrimination. These approaches don't easily relate to our statistical approach.
# 4 Achieving equalized odds and equality of opportunity | 1610.02413#13 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 14 | In-Distribution / Out-of-Distribution   AUROC/Base   AUPR In/Base   AUPR Out/Base   Pred. Prob (mean)
CIFAR-10/SUN         95/50   89/33   97/67   72
CIFAR-10/Gaussian    97/50   98/49   95/51   77
CIFAR-10/All         96/50   88/24   98/76   74
CIFAR-100/SUN        91/50   83/27   96/73   56
CIFAR-100/Gaussian   88/50   92/43   80/57   77
CIFAR-100/All        90/50   81/21   96/79   63
MNIST/Omniglot       96/50   97/52   96/48   86
MNIST/notMNIST       85/50   86/50   88/50   92
MNIST/CIFAR-10bw     95/50   95/50   95/50   87
MNIST/Gaussian       90/50   90/50   91/50   91
MNIST/Uniform        99/50   99/50   98/50   83
MNIST/All            91/50   76/20   98/80   89
Table 2: Distinguishing in- and out-of-distribution test set data for image classification. CIFAR-10/All is the same as CIFAR-10/(SUN, Gaussian). All values are percentages. | 1610.02136#14 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 14 | – Initial learning rate: 0.045
– Learning rate decay: decay of rate 0.94 every 2 epochs
• On JFT:
– Optimizer: RMSprop [22]
– Momentum: 0.9
– Initial learning rate: 0.001
– Learning rate decay: decay of rate 0.9 every 3,000,000 samples
Footnote 2: https://keras.io/applications/#xception
For both datasets, the exact same optimization configuration was used for both Xception and Inception V3. Note that this configuration was tuned for best performance with Inception V3; we did not attempt to tune optimization hyperparameters for Xception. Since the networks have different training profiles (figure 6), this may be suboptimal, especially on the ImageNet dataset, on which the optimization configuration used had been carefully tuned for Inception V3.
Additionally, all models were evaluated using Polyak averaging [13] at inference time.
# 4.3. Regularization configuration | 1610.02357#14 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
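The ImageNet configuration above translates roughly into the following Keras sketch (an assumed modern reconstruction, not the authors' code; steps_per_epoch is illustrative):

```python
import tensorflow as tf

steps_per_epoch = 10000  # illustrative; depends on batch size and dataset size

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.045,
    decay_steps=2 * steps_per_epoch,  # "decay of rate 0.94 every 2 epochs"
    decay_rate=0.94,
    staircase=True,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
# Polyak averaging, as used at inference time, would additionally maintain an
# exponential moving average of the weights and evaluate with the averaged copy.
```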
1610.02413 | 14 | # 4 Achieving equalized odds and equality of opportunity
We now explain how to find an equalized odds or equal opportunity predictor Ỹ derived from a, possibly discriminatory, learned binary predictor Ŷ or score R. We envision that Ŷ or R are whatever comes out of the existing training pipeline for the problem at hand. Importantly, we do not require changing the training process, as this might introduce additional complexity, but rather only a post-learning step. In particular, we will construct a non-discriminating predictor which is derived from Ŷ or R:
Definition 4.1 (Derived predictor). A predictor Ỹ is derived from a random variable R and the protected attribute A if it is a possibly randomized function of the random variables (R, A) alone. In particular, Ỹ is independent of X conditional on (R, A).
The definition asks that the value of a derived predictor Ỹ should only depend on R and the protected attribute, though it may introduce additional randomness. But the formulation of Ỹ (that is, the function applied to the values of R and A) depends on information about the joint distribution of (R, A, Y). In other words, this joint distribution (or an empirical estimate of it) is required at training time in order to construct the predictor Ỹ, but at prediction time we only have access to values of (R, A). No further data about the underlying features X, nor their distribution, is required. | 1610.02413#14 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
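Operationally, a derived predictor sees only (R, A) at prediction time and may randomize; a sketch for binary R, where the mixing probabilities would be fitted from the joint distribution at training time (the values below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# p[a][r]: probability of predicting 1 given group a and binary input r.
# These four numbers are what the post-processing step fits from (R, A, Y).
p = {0: {0: 0.1, 1: 0.9}, 1: {0: 0.2, 1: 0.7}}

def derived_predictor(r, a):
    """A possibly randomized function of (R, A) alone."""
    return int(rng.random() < p[a][r])
```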
1610.02136 | 15 | is random normal noise, and "Uniform" data is random uniform noise. Images are resized when necessary.
The results are shown in Table 2. Notice that the mean predicted/maximum class probabilities (Pred. Prob (mean)) are above 75%, but if the prediction probability alone is translated to confidence, the softmax distribution should be more uniform for CIFAR-100. This again shows softmax probabilities should not be viewed as a direct representation of confidence. Fortunately, out-of-distribution examples sufficiently differ in the prediction probabilities from in-distribution examples, allowing for successful detection and generally high area under PR and ROC curves. | 1610.02136#15 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 15 | Additionally, all models were evaluated using Polyak averaging [13] at inference time.
# 4.3. Regularization configuration
• Weight decay: The Inception V3 model uses a weight decay (L2 regularization) rate of 4e-5, which has been carefully tuned for performance on ImageNet. We found this rate to be quite suboptimal for Xception and instead settled for 1e-5. We did not perform an extensive search for the optimal weight decay rate. The same weight decay rates were used both for the ImageNet experiments and the JFT experiments.
• Dropout: For the ImageNet experiments, both models include a dropout layer of rate 0.5 before the logistic regression layer. For the JFT experiments, no dropout was included due to the large size of the dataset, which made overfitting unlikely in any reasonable amount of time.
• Auxiliary loss tower: The Inception V3 architecture may optionally include an auxiliary tower which back-propagates the classification loss earlier in the network, serving as an additional regularization mechanism. For simplicity, we chose not to include this auxiliary tower in any of our models.
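As a rough illustration only (this is not the authors' training code; the layer sizes and names are placeholders), the weight decay and dropout choices above might look like this in Keras:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

WEIGHT_DECAY = 1e-5  # the rate chosen for Xception; Inception V3 used 4e-5

def classifier_head(features, num_classes=1000):
    # Dropout of rate 0.5 before the logistic regression layer (ImageNet setup);
    # for JFT the paper omits dropout entirely.
    x = layers.Dropout(0.5)(features)
    return layers.Dense(
        num_classes,
        activation="softmax",
        kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
    )(x)
```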
# 4.4. Training infrastructure | 1610.02357#15 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 15 | Loss minimization. It is always easy to construct a trivial predictor satisfying equalized odds, by making decisions independent of X, A and R. For example, using the constant predictor Ỹ = 0 or Ỹ = 1. The goal, of course, is to obtain a good predictor satisfying the condition. To quantify the notion of "good", we consider a loss function ℓ: {0,1}² → ℝ that takes a pair of labels and returns a real number ℓ(ŷ, y) ∈ ℝ which indicates the loss (or cost, or undesirability) of predicting ŷ when the correct label is y. Our goal is then to design derived predictors Ỹ that minimize the expected loss E ℓ(Ỹ, Y) subject to one of our definitions.
# 4.1 Deriving from a binary predictor
We will first develop an intuitive geometric solution for the case where we adjust a binary predictor Ŷ and A is a binary protected attribute. The proof generalizes directly to the case of a discrete protected attribute with more than two values. For convenience, we introduce the notation
γ_a(Ŷ) def= (Pr{Ŷ = 1 | A = a, Y = 0}, Pr{Ŷ = 1 | A = a, Y = 1}).   (4.1) | 1610.02413#15 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
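The vector γ_a(Ŷ) in (4.1) is just a pair of empirical conditional rates; a small sketch, where the binary arrays `y_hat`, `y`, `a` are assumptions:

```python
import numpy as np

def gamma(y_hat, y, a, group):
    """Empirical (false positive rate, true positive rate) of y_hat
    within the subpopulation a == group."""
    in_group = (a == group)
    fpr = y_hat[in_group & (y == 0)].mean()  # Pr{Yhat=1 | A=group, Y=0}
    tpr = y_hat[in_group & (y == 1)].mean()  # Pr{Yhat=1 | A=group, Y=1}
    return fpr, tpr

# Toy data: y_hat, y, a are 0/1 arrays of equal length.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
a = rng.integers(0, 2, 10_000)
y_hat = np.where(rng.random(10_000) < 0.8, y, 1 - y)  # noisy predictor

print(gamma(y_hat, y, a, 0), gamma(y_hat, y, a, 1))
```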
1610.02136 | 16 | For reproducibility, let us specify the model architectures. The MNIST classifier is a three-layer, 256 neuron-wide, fully-connected network trained for 30 epochs with Adam (Kingma & Ba, 2015). It uses a GELU nonlinearity (Hendrycks & Gimpel, 2016b), xΦ(x), where Φ(x) is the CDF of the standard normal distribution. We initialize our weights according to (Hendrycks & Gimpel, 2016c), as it is suited for arbitrary nonlinearities. For CIFAR-10 and CIFAR-100, we train a 40-4 wide residual network (Zagoruyko & Komodakis, 2016) for 50 epochs with stochastic gradient descent using restarts (Loshchilov & Hutter, 2016), the GELU nonlinearity, and standard mirroring and cropping data augmentation.
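Since the GELU is defined here as xΦ(x), a one-line reference implementation may help; this sketch uses the exact Gaussian CDF rather than a tanh approximation:

```python
import numpy as np
from scipy.stats import norm

def gelu(x):
    # GELU(x) = x * Phi(x), with Phi the standard normal CDF.
    return x * norm.cdf(x)

print(gelu(np.array([-2.0, 0.0, 2.0])))  # approx. [-0.0455, 0.0, 1.9545]
```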
3.2 NATURAL LANGUAGE PROCESSING
Let us turn to a variety of tasks and architectures used in natural language processing.
3.2.1 SENTIMENT CLASSIFICATION | 1610.02136#16 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 16 | # 4.4. Training infrastructure
All networks were implemented using the TensorFlow framework [1] and trained on 60 NVIDIA K80 GPUs each. For the ImageNet experiments, we used data parallelism with synchronous gradient descent to achieve the best classification performance, while for JFT we used asynchronous gradient descent so as to speed up training. The ImageNet experiments took approximately 3 days each, while the JFT experiments took over one month each. The JFT models were not trained to full convergence, which would have taken over three months per experiment.
Figure 5. The Xception architecture: the data first goes through the entry flow, then through the middle flow which is repeated eight times, and finally through the exit flow. Note that all Convolution and SeparableConvolution layers are followed by batch normalization [7] (not included in the diagram). All SeparableConvolution layers use a depth multiplier of 1 (no depth expansion). | 1610.02357#16 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 16 | The first component of γ_a(Ŷ) is the false positive rate of Ŷ within the demographic satisfying A = a. Similarly, the second component is the true positive rate of Ŷ within A = a. Observe that we can calculate γ_a(Ŷ) given the joint distribution of (Ŷ, A, Y). The definitions of equalized odds and equal opportunity can be expressed in terms of γ_a(Ŷ), as formalized in the following straightforward lemma:
Lemma 4.2. A predictor Ŷ satisfies:
1. equalized odds if and only if γ_0(Ŷ) = γ_1(Ŷ), and
2. equal opportunity if and only if γ_0(Ŷ) and γ_1(Ŷ) agree in the second component, i.e., γ_0(Ŷ)_2 = γ_1(Ŷ)_2.
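Lemma 4.2 translates directly into a check. A sketch reusing a `gamma` helper like the one sketched earlier; the tolerance is an assumption, since empirical rates never match exactly:

```python
import numpy as np

def gamma(y_hat, y, a, group):
    in_group = (a == group)
    return (y_hat[in_group & (y == 0)].mean(),   # false positive rate
            y_hat[in_group & (y == 1)].mean())   # true positive rate

def satisfies_equalized_odds(y_hat, y, a, tol=0.01):
    g0, g1 = gamma(y_hat, y, a, 0), gamma(y_hat, y, a, 1)
    return abs(g0[0] - g1[0]) <= tol and abs(g0[1] - g1[1]) <= tol

def satisfies_equal_opportunity(y_hat, y, a, tol=0.01):
    # Only the second component (the true positive rate) must agree.
    return abs(gamma(y_hat, y, a, 0)[1] - gamma(y_hat, y, a, 1)[1]) <= tol
```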
For a ∈ {0,1}, consider the two-dimensional convex polytope defined as the convex hull of four vertices:
P_a(Ŷ) def= convhull{(0,0), γ_a(Ŷ), γ_a(1 - Ŷ), (1,1)}   (4.2) | 1610.02413#16 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 17 | Let us turn to a variety of tasks and architectures used in natural language processing.
3.2.1 SENTIMENT CLASSIFICATION
The first NLP task is binary sentiment classification using the IMDB dataset (Maas et al., 2011), a dataset of polarized movie reviews with 25000 training and 25000 test reviews. This task allows us to determine if classifiers trained on a relatively small dataset still produce informative softmax
Dataset  AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
IMDB     82/50       97/88           36/12          74                       11.9
Table 3: Detecting correct and incorrect classifications for binary sentiment classification.
In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base
IMDB/Customer Reviews                  95/50       99/89         60/11
IMDB/Movie Reviews                     94/50       98/72         80/28
IMDB/All                               94/50       97/66         84/34
Table 4: Distinguishing in- and out-of-distribution test set data for binary sentiment classification. IMDB/All is the same as IMDB/(Customer Reviews, Movie Reviews). All values are percentages. | 1610.02136#17 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 17 | [Figure 5 layer diagram, garbled in extraction. Recoverable structure: entry flow takes 299x299x3 input through Conv 32 3x3 stride 2x2 + ReLU and Conv 64 3x3 + ReLU, then SeparableConv blocks of width 128, 256 and 728 with MaxPooling 3x3 stride 2x2 and Conv 1x1 stride 2x2 on the residual branches; middle flow applies three ReLU + SeparableConv 728 3x3 layers on 19x19x728 feature maps, repeated 8 times; exit flow applies SeparableConv 728 and 1024 3x3 with MaxPooling 3x3 stride 2x2, then SeparableConv 1536 (diagram truncated here).] | 1610.02357#17 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
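The block this architecture stacks is compact enough to write out. A naive NumPy sketch of one depthwise separable convolution (3x3 depthwise, then 1x1 pointwise, depth multiplier 1, 'same' padding, no intermediate non-linearity; the loops are for clarity, not speed):

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_k):
    """x: (H, W, C_in); depth_k: (3, 3, C_in); point_k: (C_in, C_out)."""
    h, w, c_in = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    depth_out = np.empty_like(x)
    # Depthwise step: one 3x3 filter per input channel, channels kept separate.
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3, :]             # (3, 3, C_in)
            depth_out[i, j, :] = (patch * depth_k).sum(axis=(0, 1))
    # Pointwise step: a 1x1 convolution mixing channels, i.e. a matmul per pixel.
    return depth_out @ point_k                            # (H, W, C_out)

x = np.random.randn(8, 8, 4)
out = depthwise_separable_conv(x, np.random.randn(3, 3, 4), np.random.randn(4, 16))
print(out.shape)  # (8, 8, 16)
```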
1610.02413 | 17 | P_a(Ŷ) def= convhull{(0,0), γ_a(Ŷ), γ_a(1 - Ŷ), (1,1)}   (4.2)
Our next lemma shows that P_0(Ŷ) and P_1(Ŷ) characterize exactly the trade-offs between false positives and true positives that we can achieve with any derived classifier. The polytopes are visualized in Figure 1. Lemma 4.3. A predictor Ỹ is derived if and only if for all a ∈ {0,1}, we have γ_a(Ỹ) ∈ P_a(Ŷ). Proof. Since a derived predictor Ỹ can only depend on (Ŷ, A) and these variables are binary, the predictor Ỹ is completely described by four parameters in [0,1] corresponding to the probabilities Pr{Ỹ = 1 | Ŷ = ŷ, A = a} for ŷ, a ∈ {0,1}. Each of these parameter choices leads to one of the points in P_a(Ŷ), and every point in the convex hull can be achieved by some parameter setting. | 1610.02413#17 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
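Following (4.2) and Lemma 4.3, the polytope's four vertices can be computed directly from data: they are γ_a of the constant-0 predictor, of Ŷ, of 1 - Ŷ, and of the constant-1 predictor. A sketch, with `gamma` as before:

```python
import numpy as np

def gamma(y_hat, y, a, group):
    in_group = (a == group)
    return np.array([y_hat[in_group & (y == 0)].mean(),
                     y_hat[in_group & (y == 1)].mean()])

def polytope_vertices(y_hat, y, a, group):
    # convhull{(0,0), gamma_a(Yhat), gamma_a(1 - Yhat), (1,1)}, eq. (4.2)
    return np.array([[0.0, 0.0],
                     gamma(y_hat, y, a, group),
                     gamma(1 - y_hat, y, a, group),
                     [1.0, 1.0]])
```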
1610.02136 | 18 | distributions. For this task we use a linear classifier taking as input the average of trainable, randomly initialized word vectors with dimension 50 (Joulin et al., 2016; Iyyer et al., 2015). We train for 15 epochs with Adam and early stopping based upon 5000 held-out training reviews. Again, Table 3 shows that the softmax distributions differ between correctly and incorrectly classified examples, so prediction probabilities allow us to reliably detect which examples are right and wrong.
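A sketch of this averaged-word-vector classifier (the embedding dimension 50 follows the text; the vocabulary size, names and initialization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 20_000, 50
E = rng.normal(scale=0.1, size=(VOCAB, DIM))  # trainable word vectors
W = rng.normal(scale=0.1, size=(DIM, 2))      # linear sentiment classifier
b = np.zeros(2)

def predict_proba(token_ids):
    # Average the word vectors, then apply the linear layer and a softmax.
    h = E[token_ids].mean(axis=0)
    z = h @ W + b
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

print(predict_proba(np.array([3, 141, 592, 6535])))
```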
Now we use the Customer Review (Hu & Liu, 2004) and Movie Review (Pang et al., 2002) datasets as out-of-distribution examples. The Customer Review dataset has reviews of products rather than only movies, and the Movie Review dataset has snippets from professional movie reviewers rather than full-length amateur reviews. We leave all test set examples from IMDB as in-distribution examples, and out-of-distribution examples are the 500 or 1000 test reviews from Customer Review and Movie Review datasets, respectively. Table 4 displays detection results, showing a similar story to Table 2.
# 3.2.2 TEXT CATEGORIZATION | 1610.02136#18 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02413 | 18 |
[Figure 1 panels, garbled in extraction. Left panel (equal odds: the result lies below all ROC curves) and right panel (equal opportunity: the results lie on the same horizontal line) show the achievable regions for A=0 and A=1 and their overlap in the (Pr[Ŷ=1 | A, Y=0], Pr[Ŷ=1 | A, Y=1]) plane, with the points for Ỹ=Ŷ and Ỹ=1-Ŷ, the equal-odds optimum, and the equal-opportunity optima for A=0 and A=1.]
Figure 1: Finding the optimal equalized odds predictor (left), and equal opportunity predictor (right).
Combining Lemma 4.2 with Lemma 4.3, we see that the following optimization problem gives the optimal derived predictor with equalized odds:
min_Ỹ   E ℓ(Ỹ, Y)                                    (4.3)
s.t.  ∀a ∈ {0,1}:  γ_a(Ỹ) ∈ P_a(Ŷ)    (derived)
      γ_0(Ỹ) = γ_1(Ỹ)                  (equalized odds) | 1610.02413#18 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
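Program (4.3) is small enough to solve directly. A minimal SciPy sketch, assuming the 0/1 loss and empirical estimates in place of the true joint distribution; the data arrays and all names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def equalized_odds_lp(y_hat, y, a):
    """Sketch of program (4.3) under 0/1 loss.
    Variables x[2*yh + g] = Pr{Ytilde = 1 | Yhat = yh, A = g}."""
    n = len(y)
    c = np.zeros(4)                 # objective: misclassification probability
    A_eq = np.zeros((2, 4))         # equalized odds: FPRs and TPRs must match
    for yh in (0, 1):
        for g in (0, 1):
            k = 2 * yh + g
            # Coefficient: Pr{Yhat=yh, A=g, Y=0} - Pr{Yhat=yh, A=g, Y=1}
            c[k] = (((y_hat == yh) & (a == g) & (y == 0)).sum()
                    - ((y_hat == yh) & (a == g) & (y == 1)).sum()) / n
            sign = 1.0 if g == 0 else -1.0
            for comp in (0, 1):     # gamma component: 0 -> FPR, 1 -> TPR
                mask = (a == g) & (y == comp)
                A_eq[comp, k] = sign * (y_hat[mask] == yh).mean()
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(2), bounds=[(0, 1)] * 4)
    return res.x.reshape(2, 2)      # rows: value of Yhat, columns: group

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 50_000)
a = rng.integers(0, 2, 50_000)
acc = np.where(a == 0, 0.7, 0.9)    # group-dependent accuracy, so odds differ
y_hat = np.where(rng.random(50_000) < acc, y, 1 - y)
print(equalized_odds_lp(y_hat, y, a))
```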
1610.02136 | 19 | # 3.2.2 TEXT CATEGORIZATION
We turn to text categorization tasks to determine whether softmax distributions are useful for de- tecting similar but out-of-distribution examples. In the following text categorization tasks, we train classiï¬ers to predict the subject of the text they are processing. In the 20 Newsgroups dataset (Lang, 1995), there are 20 different newsgroup subjects with a total of 20000 documents for the whole dataset. The Reuters 8 (Lewis et al., 2004) dataset has eight different news subjects with nearly 8000 stories in total. The Reuters 52 dataset has 52 news subjects with slightly over 9000 news stories; this dataset can have as few as three stories for a single subject. | 1610.02136#19 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 19 | # 4.5. Comparison with Inception V3
152 [4].
# 4.5.1 Classification performance
All evaluations were run with a single crop of the input images and a single model. ImageNet results are reported on the validation set rather than the test set (i.e. on the non-blacklisted images from the validation set of ILSVRC 2012). JFT results are reported after 30 million iterations (one month of training) rather than after full convergence. Results are provided in table 1 and table 2, as well as figure 6, figure 7, figure 8. On JFT, we tested both versions of our networks that did not include any fully-connected layers, and versions that included two fully-connected layers of 4096 units each before the logistic regression layer. | 1610.02357#19 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 19 | (derived)
γ_0(Ỹ) = γ_1(Ỹ)   (equalized odds)
Figure 1 gives a simple geometric picture for the solution of the linear program whose guarantees are summarized next.
Proposition 4.4. The optimization problem (4.3) is a linear program in four variables whose coefficients can be computed from the joint distribution of (Ŷ, A, Y). Moreover, its solution is an optimal equalized odds predictor derived from Ŷ and A.
Proof of Proposition 4.4. The second claim follows by combining Lemma 4.2 with Lemma 4.3. To argue the first claim, we saw in the proof of Lemma 4.3 that a derived predictor is specified by four parameters and the constraint region is an intersection of two-dimensional linear constraints. It remains to show that the objective function is a linear function in these parameters. Writing out the objective, we have
E[ℓ(Ỹ, Y)] = Σ_{ŷ, y ∈ {0,1}} ℓ(ŷ, y) · Pr{Ỹ = ŷ, Y = y}.
Further, | 1610.02413#19 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 20 | For the 20 Newsgroups dataset we train a linear classifier on 30-dimensional word vectors for 20 epochs. Meanwhile, Reuters 8 and Reuters 52 use one-layer neural networks with a bag-of-words input and a GELU nonlinearity, all optimized with Adam for 5 epochs. We train on a subset of subjects, leaving out 5 newsgroup subjects from 20 Newsgroups, 2 news subjects from Reuters 8, and 12 news subjects from Reuters 52, leaving the rest as out-of-distribution examples. Table 5 shows that with these datasets and architectures, we can detect errors dependably, and Table 6 informs us that the softmax prediction probabilities allow for detecting out-of-distribution subjects.
Dataset        AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
15 Newsgroups  89/50       99/93           42/7.3         53                       7.31
Reuters 6      89/50       100/98          35/2.5         77                       2.53
Reuters 40     91/50       99/92           45/7.6         62                       7.55
Table 5: Detecting correct and incorrect classifications for text categorization.
| 1610.02136#20 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 20 | On ImageNet, Xception shows marginally better results than Inception V3. On JFT, Xception shows a 4.3% relative improvement on the FastEval14k MAP@100 metric. We also note that Xception outperforms ImageNet results reported by He et al. for ResNet-50, ResNet-101 and ResNet-152 [4]. Table 1. Classification performance comparison on ImageNet (single crop, single model). VGG-16 and ResNet-152 numbers are only included as a reminder. The version of Inception V3 being benchmarked does not include the auxiliary tower.
Model         Top-1 accuracy  Top-5 accuracy
VGG-16        0.715           0.901
ResNet-152    0.770           0.933
Inception V3  0.782           0.941
Xception      0.790           0.945 | 1610.02357#20 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 20 | Further,
Pr{Ỹ = ŷ, Y = y} = Pr{Ỹ = ŷ, Y = y | Ỹ = Ŷ} Pr{Ỹ = Ŷ} + Pr{Ỹ = ŷ, Y = y | Ỹ ≠ Ŷ} Pr{Ỹ ≠ Ŷ}
                 = Pr{Ŷ = ŷ, Y = y} Pr{Ỹ = Ŷ} + Pr{Ŷ = 1 - ŷ, Y = y} Pr{Ỹ ≠ Ŷ}.
All probabilities in the last line that do not involve Ỹ can be computed from the joint distribution. The probabilities that do involve Ỹ are each a linear function of the parameters that specify Ỹ.
The corresponding optimization problem for equal opportunity is the same except that it has the weaker constraint γ_0(Ỹ)_2 = γ_1(Ỹ)_2. The proof is analogous to that of Proposition 4.4. Figure 1 explains the solution geometrically.
# 4.2 Deriving from a score function | 1610.02413#20 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 21 | Table 5: Detecting correct and incorrect classifications for text categorization.
In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
15/5 Newsgroups                        75/50       92/84         45/16          65
Reuters6/Reuters2                      92/50       100/95        56/4.5         72
Reuters40/Reuters12                    95/50       100/93        60/7.2         47
Table 6: Distinguishing in- and out-of-distribution test set data for text categorization.
Dataset  AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
WSJ      96/50       100/96          51/3.7         71                       3.68
Twitter  89/50       98/87           53/13          69                       12.59
Table 7: Detecting correct and incorrect classifications for part-of-speech tagging.
3.2.3 PART-OF-SPEECH TAGGING | 1610.02136#21 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 21 | The Xception architecture shows a much larger performance improvement on the JFT dataset compared to the ImageNet dataset. We believe this may be due to the fact that Inception V3 was developed with a focus on ImageNet and may thus be by design over-fit to this specific task. On the other hand, neither architecture was tuned for JFT. It is likely that a search for better hyperparameters for Xception on ImageNet (in particular optimization parameters and regularization parameters) would yield significant additional improvement.
Table 2. Classification performance comparison on JFT (single crop, single model).
Model                        FastEval14k MAP@100
Inception V3 - no FC layers  6.36
Xception - no FC layers      6.70
Inception V3 with FC layers  6.50
Xception with FC layers      6.78
Figure 6. Training profile on ImageNet
[Figure 6 plot: ImageNet validation accuracy vs. gradient descent steps for Xception and Inception V3.]
Figure 7. Training profile on JFT, without fully-connected layers
[Figure 7 plot: FastEval14k MAP@100 (no FC layers) vs. gradient descent steps for Xception and Inception V3.] | 1610.02357#21 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 21 | # 4.2 Deriving from a score function
We now consider deriving non-discriminating predictors from a real-valued score R ∈ [0,1]. The motivation is that in many realistic scenarios (such as FICO scores), the data are summarized by a one-dimensional score function and a decision is made based on the score, typically by thresholding it. Since a continuous statistic can carry more information than a binary outcome Ŷ, we can hope to achieve higher utility when working with R directly, rather than with a binary predictor Ŷ.
A "protected attribute blind" way of deriving a binary predictor from R would be to threshold it, i.e., using Ỹ = I{R > t}. If R satisfied equalized odds, then so will such a predictor, and the optimal threshold should be chosen to balance false positives and false negatives so as to minimize the expected loss. When R does not already satisfy equalized odds, we might need to use different thresholds for different values of A (different protected groups), i.e. Ỹ = I{R > t_a}. As we will see, even this might not be sufficient, and we might need to introduce additional randomness as in the preceding section. | 1610.02413#21 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
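The group-specific thresholding Ỹ = I{R > t_a} is a one-liner; a sketch, with purely hypothetical thresholds:

```python
import numpy as np

def threshold_by_group(r, a, thresholds):
    """Ytilde = I{R > t_a}: group-specific thresholding of a score in [0, 1]."""
    t = np.array([thresholds[g] for g in a])
    return (r > t).astype(int)

rng = np.random.default_rng(0)
r = rng.random(8)
a = rng.integers(0, 2, 8)
print(threshold_by_group(r, a, thresholds={0: 0.6, 1: 0.4}))
```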
1610.02136 | 22 | Part-of-speech (POS) tagging of newswire and social media text is our next challenge. We use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), which contains 45 distinct POS tags. For social media, we use POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013) which contain 25 tags. For the WSJ tagger, we train a bidirectional long short-term memory recurrent neural network (Hochreiter & Schmidhuber, 1997) with three layers, 128 neurons per layer, with randomly initialized word vectors, and this is trained on 90% of the corpus for 10 epochs with stochastic gradient descent with a batch size of 32. The tweet tagger is simpler, as it is a two-layer neural network with a GELU nonlinearity, a weight initialization according to (Hendrycks & Gimpel, 2016c), pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013), and a hidden layer size of 256, all while training on 1000 tweets for 30 epochs with Adam and early stopping with 327 validation tweets. Error detection results are in Table 7. For out-of- | 1610.02136#22 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 22 | ularization parameters) would yield significant additional improvement.
# 4.5.2 Size and speed
Table 3. Size and training speed comparison.
Model         Parameter count  Steps/second
Inception V3  23,626,728       31
Xception      22,855,952       28
In table 3 we compare the size and speed of Inception
Figure 8. Training profile on JFT, with fully-connected layers
[Figure 8 plot: FastEval14k MAP@100 (with FC layers) vs. gradient descent steps for Xception and Inception V3.]
V3 and Xception. Parameter count is reported on ImageNet (1000 classes, no fully-connected layers) and the number of training steps (gradient updates) per second is reported on ImageNet with 60 K80 GPUs running synchronous gradient descent. Both architectures have approximately the same size (within 3.5%), and Xception is marginally slower. We expect that engineering optimizations at the level of the depthwise convolution operations can make Xception faster than Inception V3 in the near future. The fact that both architectures have almost the same number of parameters indicates that the improvement seen on ImageNet and JFT does not come from added capacity but rather from a more efficient use of the model parameters.
# 4.6. Effect of the residual connections
Figure 9. Training profile with and without residual connections. | 1610.02357#22 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 22 | Central to our study is the ROC (Receiver Operator Characteristic) curve of the score, which captures the false positive and true positive (equivalently, false negative) rates at different thresholds. These are curves in a two-dimensional plane, where the horizontal axis is the false positive rate of a predictor and the vertical axis is the true positive rate. As discussed in the previous section, equalized odds can be stated as requiring that the true positive and false positive rates, (Pr{Ŷ = 1 | Y = 1, A = a}, Pr{Ŷ = 1 | Y = 0, A = a}), agree between different values a of the protected attribute. That is, for all values of the protected attribute, the conditional behavior of the predictor is at exactly the same point in this space. We will therefore consider the A-conditional ROC curves
C_a(t) def= (Pr{R > t | A = a, Y = 0}, Pr{R > t | A = a, Y = 1}).
Since the ROC curves exactly specify the conditional distributions R | A, Y, a score function obeys equalized odds if and only if the ROC curves for all values of the protected attribute agree, that is, C_a(t) = C_{a′}(t) for all values of a, a′ and t. In this case, any thresholding of R yields an equalized odds predictor (all protected groups are at the same point on the curve, and the same point in the false/true-positive plane). | 1610.02413#22 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
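The A-conditional ROC curves C_a(t) can be traced with standard tooling; a sketch in which the synthetic score is deliberately more informative for one group, so the two curves disagree (all data and names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 20_000
y = rng.integers(0, 2, n)
a = rng.integers(0, 2, n)
# Synthetic score whose quality differs by group.
r = np.clip(y * np.where(a == 0, 0.3, 0.6) + rng.normal(0.35, 0.2, n), 0.0, 1.0)

for g in (0, 1):
    m = (a == g)
    fpr, tpr, thresholds = roc_curve(y[m], r[m])  # traces C_g(t) over t
    print(f"group {g}: {len(fpr)} points on the ROC curve")
```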
1610.02136 | 23 | while training on 1000 tweets for 30 epochs with Adam and early stopping with 327 validation tweets. Error detection results are in Table 7. For out-of-distribution detection, we use the WSJ tagger on the tweets as well as weblog data from the English Web Treebank (Bies et al., 2012). The results are shown in Table 8. Since the weblog data is closer in style to newswire than are the tweets, it is harder to detect whether a weblog sentence is out-of-distribution than a tweet. Indeed, since POS tagging is done at the word level, we are detecting whether each word is out-of-distribution given the word and contextual features. With this in mind, we see that it is easier to detect words as out-of-distribution if they are from tweets than from blogs. | 1610.02136#23 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 23 | # 4.6. Effect of the residual connections
Figure 9. Training profile with and without residual connections.
[Figure 9 plot: ImageNet validation accuracy vs. gradient descent steps for Xception and the non-residual Xception variant.]
To quantify the benefits of residual connections in the Xception architecture, we benchmarked on ImageNet a modified version of Xception that does not include any residual
connections. Results are shown in figure 9. Residual connections are clearly essential in helping with convergence, both in terms of speed and final classification performance. However we will note that benchmarking the non-residual model with the same optimization configuration as the residual model may be uncharitable and that better optimization configurations might yield more competitive results. | 1610.02357#23 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 23 | When the ROC curves do not agree, we might choose different thresholds t_a for the different protected groups. This yields different points on each A-conditional ROC curve. For the resulting predictor to satisfy equalized odds, these must be at the same point in the false/true-positive plane. This is possible only at points where all A-conditional ROC curves intersect. But the ROC curves might not all intersect except at the trivial endpoints, and even if they do, their point of intersection might represent a poor tradeoff between false positives and false negatives. As with the case of correcting a binary predictor, we can use randomization to fill the span of possible derived predictors and allow for significant intersection in the false/true-positive plane. In particular, for every protected group a, consider the convex hull of the image of the conditional ROC curve:
D_a def= convhull{C_a(t) : t ∈ [0, 1]}   (4.4)
| 1610.02413#23 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
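Computing D_a in (4.4) amounts to taking the convex hull of the sampled ROC points; a sketch on synthetic data (roc_curve already includes the trivial endpoints (0,0) and (1,1)):

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.metrics import roc_curve

def roc_hull(r, y, a, group):
    """Vertices of D_a = convhull{C_a(t)}: the achievable (FPR, TPR) region
    for randomized thresholdings of the score within one group."""
    m = (a == group)
    fpr, tpr, _ = roc_curve(y[m], r[m])
    pts = np.column_stack([fpr, tpr])
    hull = ConvexHull(pts)
    return pts[hull.vertices]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 20_000)
a = rng.integers(0, 2, 20_000)
r = np.clip(0.5 * y + rng.normal(0.25, 0.2, 20_000), 0.0, 1.0)
print(roc_hull(r, y, a, group=0))
```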
1610.02357 | 24 | Additionally, let us note that this result merely shows the importance of residual connections for this specific architecture, and that residual connections are in no way required in order to build models that are stacks of depthwise separable convolutions. We also obtained excellent results with non-residual VGG-style models where all convolution layers were replaced with depthwise separable convolutions (with a depth multiplier of 1), superior to Inception V3 on JFT at equal parameter count.
# 4.7. Effect of an intermediate activation after pointwise convolutions
Figure 10. Training profile with different activations between the depthwise and pointwise operations of the separable convolution layers.
[Figure 10 plot: ImageNet validation accuracy vs. gradient descent steps for no intermediate activation, intermediate ELU, and intermediate ReLU.] | 1610.02357#24 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 24 | D_a def= convhull{C_a(t) : t ∈ [0, 1]}   (4.4)
[Figure panels, garbled in extraction. Panel titles: "Within each group, max profit is a tangent of the ROC curve"; "Equal odds makes the average vector tangent to the interior"; "Equal opportunity cost is a convex function of TP rate". Axes: Pr[Ŷ = 1 | A, Y = 0] vs. Pr[Ŷ = 1 | A, Y = 1], with curves for A=0, A=1, their average, and the optimal point; the last panel plots the cost of the best solution for a given true positive rate.] | 1610.02413#24 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 25 | 3.3 AUTOMATIC SPEECH RECOGNITION
Now we consider a task which uses softmax values to construct entire sequences rather than determine an input's class. Our sequence prediction system uses a bidirectional LSTM with two layers and a clipped GELU nonlinearity, optimized for 60 epochs with RMSProp trained on 80% of the TIMIT corpus (Garofolo et al., 1993). The LSTM is trained with connectionist temporal classification (CTC) (Graves et al., 2006) for predicting sequences of phones given MFCCs, energy, and first and second deltas of a 25ms frame. When trained with CTC, the LSTM learns to have its phone label probabilities spike momentarily while mostly predicting blank symbols otherwise. In this way, the softmax is used differently from typical classification problems, providing a unique test for our detection methods.
We do not show how the system performs on correctness/incorrectness detection because errors are not binary and instead lie along a range of edit distances. However, we can perform out-of-
| 1610.02136#25 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 25 | [Plot: ImageNet validation accuracy vs. gradient descent steps, comparing no intermediate activation, intermediate ELU, and intermediate ReLU.]
We mentioned earlier that the analogy between depthwise separable convolutions and Inception modules suggests that depthwise separable convolutions should potentially include a non-linearity between the depthwise and pointwise operations. In the experiments reported so far, no such non-linearity was included. However, we also experimentally tested the inclusion of either ReLU or ELU [3] as an intermediate non-linearity. Results are reported on ImageNet in figure 10, and show that the absence of any non-linearity leads to both faster convergence and better final performance.
This is a remarkable observation, since Szegedy et al. report the opposite result in [21] for Inception modules. It may be that the depth of the intermediate feature spaces on which spatial convolutions are applied is critical to the usefulness of the non-linearity: for deep feature spaces (e.g. those found in Inception modules) the non-linearity is helpful, but for shallow ones (e.g. the 1-channel deep feature spaces of depthwise separable convolutions) it becomes harmful, possibly due to a loss of information.
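To make the comparison concrete, here is a minimal Keras sketch (ours, not the paper's released code; layer shapes and names are assumptions) of a separable convolution block with an optional non-linearity between the depthwise and pointwise steps:

```python
# A sketch of the variant tested here, assuming TensorFlow/Keras.
from tensorflow.keras import layers

def separable_block(x, filters, mid_activation=None):
    # Depthwise step: one 3x3 spatial filter per input channel.
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
    if mid_activation is not None:  # "relu" or "elu" in the figure 10 runs
        x = layers.Activation(mid_activation)(x)
    # Pointwise step: 1x1 convolution mixing channels.
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)  # outer activation kept in all variants
```

In the figure 10 comparison, the `mid_activation=None` configuration corresponds to the best-performing variant.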
# 5. Future directions | 1610.02357#25 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 25 |
Figure 2: Finding the optimal equalized odds threshold predictor (middle), and equal opportunity threshold predictor (right). For the equal opportunity predictor, within each group the cost for a given true positive rate is proportional to the horizontal gap between the ROC curve and the profit-maximizing tangent line (i.e., the two curves on the left plot), so it is a convex function of the true positive rate (right). This lets us optimize it efficiently with ternary search.
The definition of D_a is analogous to the polytope P_a in the previous section, except that here we do not consider points below the main diagonal (line from (0, 0) to (1, 1)), which are worse than "random guessing" and hence never desirable for any reasonable loss function.
Deriving an optimal equalized odds threshold predictor. Any point in the convex hull D_a represents the false/true positive rates, conditioned on A = a, of a randomized derived predictor based on R̂. In particular, since the space is only two-dimensional, such a predictor Ỹ can always be taken to be a mixture of two threshold predictors (corresponding to the convex hull of two points on the ROC curve). Conditioned on A = a, the predictor Ỹ behaves as
Ỹ = I{R̂ > T_a}, | 1610.02413#25 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 26 | Table 9 values, listed per metric (the per-condition row labels were not preserved in extraction):
AUROC/Base: 99/50, 100/50, 98/50, 100/50, 98/50, 100/50, 100/50, 100/50, 85/50, 97/50
AUPR In/Base: 99/50, 100/50, 98/50, 100/50, 98/50, 100/50, 100/50, 100/50, 80/34, 79/10
AUPR Out/Base: 99/50, 100/50, 98/50, 100/50, 98/50, 100/50, 100/50, 100/50, 90/66, 100/90
Pred. Prob (mean): 59, 55, 59, 57, 60, 52, 56, 58, 64, 58
Table 9: Detecting out-of-distribution distorted speech. All values are percentages. | 1610.02136#26 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 26 | # 5. Future directions
We noted earlier the existence of a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. Inception modules are one point on this spectrum. We showed in our empirical evaluation that the extreme formulation of an Inception module, the depthwise separable convolution, may have advantages over a regular Inception module. However, there is no reason to believe that depthwise separable convolutions are optimal. It may be that intermediate points on the spectrum, lying between regular Inception modules and depthwise separable convolutions, hold further advantages. This question is left for future investigation.
# 6. Conclusions | 1610.02357#26 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 26 | Ỹ = I{R̂ > T_a},
where T_a is a randomized threshold assuming the lower value t̲_a with probability p̲_a and the upper value t̄_a with probability p̄_a. In other words, to construct an equalized odds predictor, we should choose a point in the intersection of these convex hulls, γ = (γ₀, γ₁) ∈ ∩_a D_a, and then for each group a realize the true/false-positive rates γ with a (possibly randomized) predictor Ỹ | (A = a) = I{R̂ > T_a}, resulting in the predictor Ỹ = I{R̂ > T_A}. For each group a, we either use a fixed threshold T_a = t_a or a mixture of two thresholds t̲_a < t̄_a. In the latter case, if A = a and R̂ < t̲_a we always set Ỹ = 0, if R̂ > t̄_a we always set Ỹ = 1, but if t̲_a ≤ R̂ ≤ t̄_a, we flip a coin and set Ỹ = 1 with probability p̲_a.
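To make this construction concrete, here is a minimal sketch (ours, not the authors' code; variable names are assumptions) of the derived predictor for one example: a mixture of two thresholds applied to the score R̂, with a coin flip in the middle band.

```python
import numpy as np

rng = np.random.default_rng(0)

def derived_prediction(r_hat, a, thresholds):
    """Randomized threshold predictor: Y~ = I{R^ > T_a}.

    thresholds[a] = (t_lo, t_hi, p_lo): below t_lo predict 0, above t_hi
    predict 1; in the middle band predict 1 with probability p_lo, the
    probability that the randomized threshold T_a takes its lower value.
    """
    t_lo, t_hi, p_lo = thresholds[a]
    if r_hat < t_lo:
        return 0
    if r_hat > t_hi:
        return 1
    return int(rng.random() < p_lo)

# Example: group 0 mixes thresholds 0.4 and 0.7; group 1 uses a fixed threshold.
thresholds = {0: (0.4, 0.7, 0.3), 1: (0.55, 0.55, 0.0)}
y_tilde = derived_prediction(r_hat=0.6, a=0, thresholds=thresholds)
```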
| 1610.02413#26 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 27 | Table 9: Detecting out-of-distribution distorted speech. All values are percentages.
distribution detection. Mixing the TIMIT audio with realistic noises from the Aurora-2 dataset (Hirsch & Pearce, 2000), we keep the TIMIT audio volume at 100% and noise volume at 30%, giving a mean SNR of approximately 5. Speakers are still clearly audible to the human ear but confuse the phone recognizer because the prediction edit distance more than doubles. For more out-of-distribution examples, we use the test examples from the THCHS-30 dataset (Wang & Zhang, 2015), a Chinese speech corpus. Table 9 shows the results. Crucially, when performing detection, we compute the softmax probabilities while ignoring the blank symbol's logit. With the blank symbol's presence, the softmax distributions at most time steps predict a blank symbol with high confidence, but without the blank symbol we can better differentiate between normal and abnormal distributions. With this modification, the softmax prediction probabilities allow us to detect whether an example is out-of-distribution.
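A minimal sketch of this blank-ignoring score (ours; the array layout and blank index are assumptions):

```python
import numpy as np

def nonblank_confidence(logits, blank_index=-1):
    """Mean max-softmax probability over frames, ignoring the blank logit.

    logits: array of shape (time, num_symbols) from the CTC network.
    """
    logits = np.delete(logits, blank_index, axis=1)      # drop the blank symbol
    logits = logits - logits.max(axis=1, keepdims=True)  # numerically stable softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.max(axis=1).mean()  # low scores suggest out-of-distribution input
```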
# 4 ABNORMALITY DETECTION WITH AUXILIARY DECODERS | 1610.02136#27 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 27 | # 6. Conclusions
We showed how convolutions and depthwise separable convolutions lie at both extremes of a discrete spectrum, with Inception modules being an intermediate point in between. This observation has led us to propose replacing Inception modules with depthwise separable convolutions in neural computer vision architectures. We presented a novel architecture based on this idea, named Xception, which has a similar parameter count as Inception V3. Compared to Inception V3, Xception shows small gains in classification performance on the ImageNet dataset and large gains on the JFT dataset. We expect depthwise separable convolutions to become a cornerstone of convolutional neural network architecture design in the future, since they offer similar properties as Inception modules, yet are as easy to use as regular convolution layers.
# References | 1610.02357#27 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 27 |
The feasible set of false/true positive rates of possible equalized odds predictors is thus the intersection of the areas under the A-conditional ROC curves, and above the main diagonal (see Figure 2). Since for any loss function the optimal false/true-positive rate will always be on the upper-left boundary of this feasible set, this is effectively the ROC curve of the equalized odds predictors. This ROC curve is the pointwise minimum of all A-conditional ROC curves. The performance of an equalized odds predictor is thus determined by the minimum performance among all protected groups. Said differently, requiring equalized odds incentivizes the learner to build good predictors for all classes. For a given loss function, finding the optimal tradeoff
amounts to optimizing (assuming w.l.o.g. ℓ(0,0) = ℓ(1,1) = 0):

min_γ  γ₀ · ℓ(1,0) + (1 − γ₁) · ℓ(0,1)   subject to ∀a: γ ∈ D_a    (4.5)
This is no longer a linear program, since D_a are not polytopes, or at least are not specified as such. Nevertheless, (4.5) can be efficiently optimized numerically using ternary search. | 1610.02413#27 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 28 | # 4 ABNORMALITY DETECTION WITH AUXILIARY DECODERS
Having seen that softmax prediction probabilities enable abnormality detection, we now show there is other information sometimes more useful for detection. To demonstrate this, we exploit the learned internal representations of neural networks. We start by training a normal classifier and append an auxiliary decoder which reconstructs the input, shown in Figure 1. Auxiliary decoders are sometimes known to increase classification performance (Zhang et al., 2016). The decoder and scorer are trained jointly on in-distribution examples. Thereafter, the blue layers in Figure 1 are frozen. Then we train the red layers on clean and noised training examples, and the sigmoid output of the red layers scores how normal the input is. Consequently, noised examples are in the abnormal class, clean examples are in the normal class, and the sigmoid is trained to output the class to which an input belongs. After training we consequently have a normal classifier, an auxiliary decoder, and what we call an abnormality module. The gains from the abnormality module demonstrate there are possible research avenues for outperforming the baseline.
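A minimal Keras sketch of this two-stage setup (ours, not the paper's code; the layer sizes and which features feed the scorer are assumptions):

```python
from tensorflow.keras import layers, models

inp = layers.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(inp)
h = layers.Dense(256, activation="relu")(h)
class_out = layers.Dense(10, activation="softmax", name="classifier")(h)
recon_out = layers.Dense(784, activation="sigmoid", name="decoder")(h)

# Stage 1: train the classifier and auxiliary decoder jointly on clean data.
stage1 = models.Model(inp, [class_out, recon_out])
stage1.compile("adam", ["sparse_categorical_crossentropy", "mse"])

# Stage 2: freeze those layers, then train a sigmoid scorer to separate clean
# inputs (label 1) from noised inputs (label 0).
for layer in stage1.layers:
    layer.trainable = False
score = layers.Dense(1, activation="sigmoid", name="abnormality")(
    layers.concatenate([h, recon_out]))
abmod = models.Model(inp, score)
abmod.compile("adam", "binary_crossentropy")
```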
# 4.1 TIMIT | 1610.02136#28 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 28 | [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [2] F. Chollet. Keras. https://github.com/fchollet/keras, 2015. [3] D.-A. | 1610.02357#28 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 28 | Deriving an optimal equal opportunity threshold predictor. The construction follows the same approach except that there is one fewer constraint. We only need to find points on the conditional ROC curves that have the same true positive rates in both groups. Assuming continuity of the conditional ROC curves, this means we can always find points on the boundary of the conditional ROC curves. In this case, no randomization is necessary. The optimal solution corresponds to two deterministic thresholds, one for each group. As before, the optimization problem can be solved efficiently using ternary search over the target true positive value. Here we use, as Figure 2 illustrates, that the cost of the best solution is convex as a function of its true positive rate.
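A minimal sketch of that ternary search (ours; `cost_at_tpr` is an assumed black box mapping a target true positive rate to the loss of the best per-group thresholds achieving it, which the convexity argument above makes a valid input):

```python
def ternary_search_min(cost_at_tpr, lo=0.0, hi=1.0, iters=100):
    """Minimize a convex cost over the target true positive rate in [0, 1]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost_at_tpr(m1) < cost_at_tpr(m2):
            hi = m2  # the minimum lies in [lo, m2]
        else:
            lo = m1  # the minimum lies in [m1, hi]
    return (lo + hi) / 2.0
```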
# 5 Bayes optimal predictors
In this section, we develop a theory of non-discriminating Bayes optimal classification. We will first show that a Bayes optimal equalized odds predictor can be obtained as a derived threshold predictor of the Bayes optimal regressor. Second, we quantify the loss of deriving an equalized odds predictor based on a regressor that deviates from the Bayes optimal regressor. This can be used to justify the approach of first training classifiers without any fairness constraint, and then deriving an equalized odds predictor in a second step. | 1610.02413#28 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 29 | # 4.1 TIMIT
We test the abnormality module by revisiting the TIMIT task with a different architecture and show how these auxiliary components can greatly improve detection. The system is a three-layer, 1024-neuron wide classifier with an auxiliary decoder and abnormality module. This network takes as input 11 frames and must predict the phone of the center frame, 26 features per frame. Weights are initialized according to (Hendrycks & Gimpel, 2016c). This network trains for 20 epochs, and the abnormality module trains for two. The abnormality module sees clean examples and, as negative examples, TIMIT examples distorted with either white noise, brown noise (noise with its spectral density proportional to 1/f²), or pink noise (noise with its spectral density proportional to 1/f) at various volumes.
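One way to generate such colored-noise negatives is spectral shaping via the FFT (a sketch, ours rather than the authors' code):

```python
import numpy as np

def colored_noise(n, exponent, rng=np.random.default_rng(0)):
    """Length-n noise with power spectral density ~ 1/f**exponent
    (exponent 0: white, 1: pink, 2: brown)."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]  # avoid dividing by zero at DC
    spectrum = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    spectrum /= freqs ** (exponent / 2.0)  # power then falls off as 1/f**exponent
    noise = np.fft.irfft(spectrum, n)
    return noise / np.abs(noise).max()  # normalize amplitude before volume scaling
```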
We note that the abnormality module is not trained on the same type of noise added to the test examples. Nonetheless, Table 10 shows that simple noised examples translate to effective detection of realistically distorted audio. We detect abnormal examples by comparing the typical abnormality
| 1610.02136#29 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02413 | 29 | Definition 5.1 (Bayes optimal regressor). Given random variables (X, A) and a target variable Y, the Bayes optimal regressor is R = argmin_{r(x,a)} E[(Y − r(X, A))²] = r*(X, A) with r*(x, a) = E[Y | X = x, A = a].
The Bayes optimal classifier, for any proper loss, is then a threshold predictor of R, where the threshold depends on the loss function (see, e.g., [Was10]). We will extend this result to the case where we additionally ask the classifier to satisfy an oblivious property, such as our non-discrimination properties.
Proposition 5.2. For any source distribution over (Y, X, A) with Bayes optimal regressor R(X, A), any loss function, and any oblivious property C, there exists a predictor Y*(R, A) such that:
1. Y* is an optimal predictor satisfying C. That is, E ℓ(Y*, Y) ≤ E ℓ(Ŷ, Y) for any predictor Ŷ(X, A) which satisfies C.
2. Y* is derived from (R, A). | 1610.02413#29 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 30 |
| In-Distribution / Out-of-Distribution | AUROC/Base Softmax | AUROC/Base AbMod | AUPR In/Base Softmax | AUPR In/Base AbMod | AUPR Out/Base Softmax | AUPR Out/Base AbMod |
|---|---|---|---|---|---|---|
| TIMIT/+Airport | 75/50 | 100/50 | 77/41 | 100/41 | 73/59 | 100/59 |
| TIMIT/+Babble | 94/50 | 100/50 | 95/41 | 100/41 | 91/59 | 100/59 |
| TIMIT/+Car | 70/50 | 98/50 | 69/41 | 98/41 | 70/59 | 98/59 |
| TIMIT/+Exhib. | 91/50 | 98/50 | 92/41 | 98/41 | 91/59 | 98/59 |
| TIMIT/+Rest. | 68/50 | 95/50 | 70/41 | 96/41 | 67/59 | 95/59 |
| TIMIT/+Subway | 76/50 | 96/50 | 77/41 | 96/41 | 74/59 | 96/59 |
| TIMIT/+Street | 89/50 | 98/50 | 91/41 | 99/41 | 85/59 | 98/59 |
| TIMIT/+Train | 80/50 | 100/50 | 82/41 | 100/41 | 77/59 | 100/59 |
| TIMIT/Chinese | 79/50 | 90/50 | 41/12 | 66/12 | 96/88 | 98/88 |
| Average | 80 | 97 | 77 | 95 | 80 | 98 |
| 1610.02136#30 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 30 | [4] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[5] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network, 2015.
[6] A. Howard. Mobilenets: Efficient convolutional neural networks for mobile vision applications. Forthcoming.
[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
[8] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. | 1610.02357#30 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 30 | 2. Y* is derived from (R, A).
Proof. Consider an arbitrary classifier Ŷ on the attributes (X, A), defined by a (possibly randomized) function Ŷ = f(X, A). Given (R = r, A = a), we can draw a fresh X′ from the distribution (X | R = r, A = a), and set Y* = f(X′, a). This satisfies (2). Moreover, since Y is binary with expectation R, Y is independent of X conditioned on (R, A). Hence (Y, X, R, A) and (Y, X′, R, A) have identical distributions, so (Y*, A, Y) and (Ŷ, A, Y) also have identical distributions. This implies Y* satisfies (1) as desired.
Figure 3: Graphical model for the proof of Proposition 5.2.
Corollary 5.3 (Optimality characterization). An optimal equalized odds predictor can be derived from the Bayes optimal regressor R and the protected attribute A. The same is true for an optimal equal opportunity predictor.
# 5.1 Near optimality | 1610.02413#30 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 31 | Table 10: Abnormality modules can generalize to novel distortions and detect out-of-distribution examples even when they do not severely degrade accuracy. All values are percentages.
| In-Distribution / Out-of-Distribution | AUROC/Base Softmax | AUROC/Base AbMod | AUPR In/Base Softmax | AUPR In/Base AbMod | AUPR Out/Base Softmax | AUPR Out/Base AbMod |
|---|---|---|---|---|---|---|
| MNIST/Omniglot | 95/50 | 100/50 | 95/52 | 100/52 | 95/48 | 100/48 |
| MNIST/notMNIST | 87/50 | 100/50 | 88/50 | 100/50 | 90/50 | 100/50 |
| MNIST/CIFAR-10bw | 98/50 | 100/50 | 98/50 | 100/50 | 98/50 | 100/50 |
| MNIST/Gaussian | 88/50 | 100/50 | 88/50 | 100/50 | 90/50 | 100/50 |
| MNIST/Uniform | 99/50 | 100/50 | 99/50 | 100/50 | 99/50 | 100/50 |
| Average | 93 | 100 | 94 | 100 | 94 | 100 |
Table 11: Improved detection using the abnormality module. All values are percentages. | 1610.02136#31 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 31 | [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[10] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, et al. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 261:276, 1995.
[11] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[12] F. Mamalet and C. Garcia. Simplifying ConvNets for Fast Learning. In International Conference on Artificial Neural Networks (ICANN 2012), pages 58–65. Springer, 2012. [13] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, July 1992. | 1610.02357#31 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 31 | # 5.1 Near optimality
We can furthermore show that if we can approximate the (unconstrained) Bayes optimal regressor well enough, then we can also construct a nearly optimal non-discriminating classifier. To state the result, we introduce the following distance measure on random variables.
Definition 5.4. We define the conditional Kolmogorov distance between two random variables R, R′ ∈ [0, 1] in the same probability space as A and Y as:
d_K(R, R′) := max_{a, y ∈ {0,1}}  sup_{t ∈ [0,1]}  |Pr{R > t | A = a, Y = y} − Pr{R′ > t | A = a, Y = y}|.    (5.1)
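An empirical estimate of (5.1) on samples can be computed directly (a sketch; the array names and the threshold grid are our assumptions):

```python
import numpy as np

def conditional_kolmogorov(r1, r2, a, y, grid=1001):
    """Empirical version of (5.1): max over (a, y) of the sup-distance between
    the conditional tail probabilities of r1 and r2 on a threshold grid."""
    ts = np.linspace(0.0, 1.0, grid)
    d = 0.0
    for av in (0, 1):
        for yv in (0, 1):
            mask = (a == av) & (y == yv)
            if not mask.any():
                continue
            p1 = (r1[mask][:, None] > ts).mean(axis=0)
            p2 = (r2[mask][:, None] > ts).mean(axis=0)
            d = max(d, np.abs(p1 - p2).max())
    return float(d)
```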
Without the conditioning on A and Y, this definition coincides with the standard Kolmogorov distance. Closeness in Kolmogorov distance is a rather weak requirement. We need the slightly stronger condition that the Kolmogorov distance is small for each of the four conditionings on A and Y. This captures the distance between the restricted ROC curves, as formalized next. | 1610.02413#31 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 32 | module outputs for clean examples with the outputs for the distorted examples. The noises are from Aurora-2 and are added to TIMIT examples with 30% volume. We also use the THCHS-30 dataset for Chinese speech. Unlike before, we use the THCHS-30 training examples rather than test set examples because fully connected networks can evaluate the whole training set sufficiently quickly. It is worth mentioning that fully connected deep neural networks are noise robust (Seltzer et al., 2013), yet the abnormality module can still detect whether an example is out-of-distribution. To see why this is remarkable, note that the network's frame classification error is 29.69% on the entire test (not core) dataset, and the average classification error for distorted examples is 30.43%; this is unlike the bidirectional LSTM, which had a more pronounced performance decline. Because the classification degradation was only slight, the softmax statistics alone did not provide useful out-of-distribution detection. In contrast, the abnormality module provided scores which allowed the detection of different-but-similar examples. In practice, it may be important to determine whether an example is out-of-distribution even if it does not greatly confuse the network, and the abnormality module facilitates this.
# 4.2 MNIST | 1610.02136#32 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 32 | [14] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014. [15] L. Sifre. Rigid-motion scattering for image classification,
2014. Ph.D. thesis.
[16] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pages 1233–1240, 2013.
[17] N. Silberman and S. Guadarrama. Tf-slim, 2016. [18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. | 1610.02357#32 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1610.02413 | 32 | Lemma 5.5. Let R, R′ ∈ [0, 1] be random variables in the same probability space as A and Y. Then, for any point p on a restricted ROC curve of R, there is a point q on the corresponding restricted ROC curve of R′ such that ‖p − q‖₂ ≤ √2 · d_K(R, R′).
Proof. Assume the point p is achieved by thresholding R at t ∈ [0, 1]. Let q be the point on the ROC curve achieved by thresholding R′ at the same threshold t. After applying the definition to bound the distance in each coordinate, the claim follows from Pythagoras' theorem.
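Spelled out: each coordinate of p − q compares Pr{R > t | A = a, Y = y} with Pr{R′ > t | A = a, Y = y} at the same threshold t, so by Definition 5.4 each coordinate differs by at most d_K(R, R′); hence ‖p − q‖₂ ≤ √(d_K(R, R′)² + d_K(R, R′)²) = √2 · d_K(R, R′).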
We can now show that an equalized odds predictor derived from a nearly optimal regressor is still nearly optimal among all equal odds predictors, while quantifying the loss in terms of the conditional Kolmogorov distance.
Theorem 5.6 (Near optimality). Assume that ℓ is a bounded loss function, and let R̂ ∈ [0, 1] be an arbitrary random variable. Then, there is an optimal equalized odds predictor Y* and an equalized odds predictor Ỹ derived from (R̂, A) such that
E ℓ(Ỹ, Y) ≤ E ℓ(Y*, Y) + 2√2 · d_K(R̂, R*), where R* is the Bayes optimal regressor. | 1610.02413#32 | Equality of Opportunity in Supervised Learning | We propose a criterion for discrimination against a specified sensitive
attribute in supervised learning, where the goal is to predict some target
based on available features. Assuming data about the predictor, target, and
membership in the protected group are available, we show how to optimally
adjust any learned predictor so as to remove discrimination according to our
definition. Our framework also improves incentives by shifting the cost of poor
classification from disadvantaged groups to the decision maker, who can respond
by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the
joint statistics of the predictor, the target and the protected attribute, but
not on interpretation of individual features. We study the inherent limits of
defining and identifying biases based on such oblivious measures, outlining
what can and cannot be inferred from different oblivious tests.
We illustrate our notion using a case study of FICO credit scores. | http://arxiv.org/pdf/1610.02413 | Moritz Hardt, Eric Price, Nathan Srebro | cs.LG | null | null | cs.LG | 20161007 | 20161007 | [] |
1610.02136 | 33 | # 4.2 MNIST
Finally, much like in a previous experiment, we train an MNIST classifier with three layers of width 256. This time, we also use an auxiliary decoder and abnormality module rather than relying on only softmax statistics. For abnormal examples we blur, rotate, or add Gaussian noise to training images. Gains from the abnormality module are shown in Table 11, and there is a consistent out-of-sample detection improvement compared to softmax prediction probabilities. Even for highly dissimilar examples the abnormality module can further improve detection.
8
Published as a conference paper at ICLR 2017
# 5 DISCUSSION AND FUTURE WORK | 1610.02136#33 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | We consider the two related problems of detecting if an example is
misclassified or out-of-distribution. We present a simple baseline that
utilizes probabilities from softmax distributions. Correctly classified
examples tend to have greater maximum softmax probabilities than erroneously
classified and out-of-distribution examples, allowing for their detection. We
assess performance by defining several tasks in computer vision, natural
language processing, and automatic speech recognition, showing the
effectiveness of this baseline across all. We then show the baseline can
sometimes be surpassed, demonstrating the room for future research on these
underexplored detection tasks. | http://arxiv.org/pdf/1610.02136 | Dan Hendrycks, Kevin Gimpel | cs.NE, cs.CV, cs.LG | Published as a conference paper at ICLR 2017. 1 Figure in 1 Appendix.
Minor changes from the previous version | International Conference on Learning Representations 2017 | cs.NE | 20161007 | 20181003 | [] |
1610.02357 | 33 | [19] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. [21] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[22] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural
Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[23] V. Vanhoucke. Learning visual representations at scale. ICLR, 2014.
[24] M. Wang, B. Liu, and H. Foroosh. Factorized convolutional neural networks. arXiv preprint arXiv:1608.04337, 2016. | 1610.02357#33 | Xception: Deep Learning with Depthwise Separable Convolutions | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | http://arxiv.org/pdf/1610.02357 | François Chollet | cs.CV | null | null | cs.CV | 20161007 | 20170404 | [
{
"id": "1608.04337"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |