id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1702.08734#76 | Billion-scale similarity search with GPUs | Efficient k-nearest neighbor graph construction using MapReduce for large-scale data sets. IEICE Transactions, 97-D(12):3142-3154, 2014. [46] R. Weber, H.-J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proc. International Conference on Very Large DataBases, pages 194-205, 1998. [47] P. Wieschollek, O. Wang, A. Sorkine-Hornung, and H. P. A. Lensch. Effi | 1702.08734#75 | 1702.08734#77 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#77 | Billion-scale similarity search with GPUs | cient large-scale approximate nearest neighbor search on the GPU. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2027-2035, June 2016. [48] S. Williams, A. Waterman, and D. Patterson. Roofline: An insightful visual performance model for multicore architectures. Communications of the ACM, 52(4):65-76, April 2009. Appendix: Complexity analysis of WarpSelect We derive the average number of times updates are triggered in WarpSelect, for use in Section 4.3. Let the input to k-selection be a sequence {a_1, a_2, ..., a_ℓ} (1-based indexing), a randomly chosen permutation of a set of distinct elements. Elements are read sequentially in c groups of size w (the warp; in our case, w = 32); assume ℓ is a multiple of w, so c = ℓ/w. Recall that t is the thread queue length. We call elements prior to or at position n that are in the min-k seen so far the successive min-k (at n). The likelihood that a_n is in the successive min-k at n is: α(n, k) := 1 if n ≤ k, and k/n if n > k (13), as each a_n with n > k has a k/n chance since all permutations are equally likely, and all elements in the first k qualify. Counting the insertion sorts. In a given lane, an insertion sort is triggered if the incoming value is in the successive min-(k + t) values, but the lane has " | 1702.08734#76 | 1702.08734#78 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#78 | Billion-scale similarity search with GPUs | seen" only wc_0 + (c − c_0) values, where c_0 is the previous won warp ballot. The probability of this happening is: α(wc_0 + (c − c_0), k + t) ≈ (k + t)/(wc) for c > k. (14) The approximation considers that the thread queue has seen all the wc values, not just those assigned to its lane. The probability of any lane triggering an insertion sort is then: 1 − (1 − (k + t)/(wc))^w ≈ (k + t)/c. (15) Here the approximation is a first-order Taylor expansion. Summing up the probabilities over c gives an expected number of insertions of N_2 ≈ (k + t) log(c) = O(k log(ℓ/w)). Counting full sorts. We seek N_3 = π(ℓ, k, t, w), the expected number of full sorts required for WarpSelect. | 1702.08734#77 | 1702.08734#79 | 1702.08734 | [
"1510.00149"
]
|
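As a rough worked example with hypothetical parameters (these values are not from the paper): for ℓ = 10^6 input elements, warp width w = 32, k = 100 and thread queue length t = 8, there are c = ℓ/w = 31,250 groups, so the expected number of insertion sorts is roughly N_2 ≈ (k + t) ln(c) = 108 × ln(31,250) ≈ 108 × 10.35 ≈ 1.1 × 10^3, i.e. only on the order of one insertion sort per thousand input elements.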
1702.08734#79 | Billion-scale similarity search with GPUs | Single lane. For now, we assume w = 1, so c = ℓ. Let γ(ℓ, m, k) be the probability that in a sequence {a_1, ..., a_ℓ}, exactly m of the elements as encountered by a sequential scanner (w = 1) are in the successive min-k. Given m, there are (ℓ choose m) places where these successive min-k elements can occur. It is given by a recurrence relation: γ(ℓ, m, k) = 1 if ℓ = 0 and m = 0; 0 if ℓ = 0 and m > 0; 0 if ℓ > 0 and m = 0; γ(ℓ − 1, m − 1, k) · α(ℓ, k) + γ(ℓ − 1, m, k) · (1 − α(ℓ, k)) otherwise. (16) The last case is the probability of: there is an (ℓ − 1)-sequence with m − 1 successive min-k elements preceding us and the current element is in the successive min-k, or the current element is not in the successive min-k and m ones are before us. We can then develop a recurrence relationship for π(ℓ, k, t, 1). Note that δ(ℓ, b, k, t) := Σ_{m = bt}^{min(bt + max(0, t − 1), ℓ)} γ(ℓ, m, k) (17) for b where 0 < bt ≤ ℓ is the fraction of all sequences of length ℓ that will force b sorts of data by winning the thread queue ballot, as there have to be bt to (bt + max(0, t − 1)) elements in the successive min-k for these sorts to happen (as the min-k elements will overflow the thread queues). There are at most ⌊ℓ/t⌋ won ballots that can occur, as it takes t separate sequential current min-k seen elements to win the ballot. π(ℓ, k, t, 1) is thus the expectation of this over all possible b: π(ℓ, k, t, 1) = Σ_{b=1}^{⌊ℓ/t⌋} b · δ(ℓ, b, k, t). (18) This can be computed by dynamic programming. Analytically, note that for t = 1, k = 1, π( | 1702.08734#78 | 1702.08734#80 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#80 | Billion-scale similarity search with GPUs | ℓ, 1, 1, 1) is the harmonic number H_ℓ = 1 + 1/2 + 1/3 + ... + 1/ℓ, which converges to ln(ℓ) + γ (the Euler-Mascheroni constant γ) as ℓ → ∞. For t = 1, k > 1, ℓ > k, π(ℓ, k, 1, 1) = k + k(H_ℓ − H_k), or O(k log(ℓ)), as the first k elements are in the successive min-k, and the expectation for the rest is k/(k+1) + k/(k+2) + ... + k/ℓ. For t > 1, k > 1, ℓ > k, note that there are some number D, k ≤ D ≤ ℓ, of successive min-k determinations D made for each possible {a_1, ..., a_ℓ}. The number of won ballots for each case is by definition ⌊D/t⌋, as the thread queue must fill up t times. Thus, π(ℓ, k, t, 1) = O(k log(ℓ)/t). | 1702.08734#79 | 1702.08734#81 | 1702.08734 | [
"1510.00149"
]
|
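The recurrences (16)-(18) are easy to evaluate numerically. The following is a small Python sketch (ours, not from the paper; the function names are ours) of the dynamic program the text alludes to, with a sanity check against the harmonic-number case noted just above.

```python
def alpha(n, k):
    # Eq. (13): probability that a_n is in the successive min-k at position n.
    return 1.0 if n <= k else k / n

def pi_single_lane(ell, k, t):
    """Evaluate pi(ell, k, t, 1) via the recurrences (16)-(18): the expected
    number of won ballots (full sorts) for a single lane (w = 1)."""
    # gamma[m] holds gamma(n, m, k) for the current prefix length n (Eq. 16).
    gamma = [1.0] + [0.0] * ell
    for n in range(1, ell + 1):
        a = alpha(n, k)
        new = [0.0] * (ell + 1)
        for m in range(0, n + 1):
            new[m] = gamma[m] * (1.0 - a) + (gamma[m - 1] * a if m > 0 else 0.0)
        gamma = new
    # Eqs. (17)-(18): sum b * delta(ell, b, k, t) over possible ballot counts b.
    expected = 0.0
    for b in range(1, ell // t + 1):
        hi = min(b * t + max(0, t - 1), ell)
        delta = sum(gamma[m] for m in range(b * t, hi + 1))
        expected += b * delta
    return expected

# Sanity check from the text: pi(ell, 1, 1, 1) equals the harmonic number H_ell.
ell = 1000
print(pi_single_lane(ell, 1, 1), sum(1.0 / i for i in range(1, ell + 1)))
```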
1702.08734#81 | Billion-scale similarity search with GPUs | Multiple lanes. The w > 1 case is complicated by the fact that there are joint probabilities to consider (if more than one of the w workers triggers a sort for a given group, only one sort takes place). However, the likelihood can be bounded. Let π′(ℓ, k, t, w) be the expected number of won ballots assuming no mutual interference between the w workers for winning ballots (i.e., we win b ballots if there are b ≤ w workers that independently win a ballot at a single step), but with the shared min-k set after each sort from the joint sequence. Assume that k ≥ w. | 1702.08734#80 | 1702.08734#82 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#82 | Billion-scale similarity search with GPUs | Then: π′(ℓ, k, 1, w) ≤ w π(⌈ℓ/w⌉, k, 1, 1) = O(wk log(ℓ/w)) (19) where the likelihood of the w workers seeing a successive min-k element has an upper bound of that of the first worker at each step. As before, the number of won ballots is scaled by t, so π′(ℓ, k, t, w) = O(wk log(ℓ/w)/t). Mutual interference can only reduce the number of ballots, so we obtain the same upper bound for π(ℓ, k, t, w). | 1702.08734#81 | 1702.08734 | [
"1510.00149"
]
|
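Continuing the sketch above (this snippet reuses pi_single_lane from it, so it is not self-contained, and the expected values in the comments are only rough), the stated single-lane asymptotics can be spot-checked numerically:

```python
import math

ell = 2000
# t = 1, k = 1: harmonic number, roughly ln(ell) + Euler's constant (about 8.18).
print(pi_single_lane(ell, 1, 1), math.log(ell) + 0.5772)
# t = 1, k = 16: roughly k + k*(H_ell - H_k), about 93 here.
print(pi_single_lane(ell, 16, 1))
# t = 8: ballots drop by roughly a factor of t, so this is near the t = 1 value.
print(8 * pi_single_lane(ell, 16, 8))
```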
|
1702.08608#0 | Towards A Rigorous Science of Interpretable Machine Learning | Towards A Rigorous Science of Interpretable Machine Learning Finale Doshi-Velez* and Been Kim* From autonomous cars and adaptive email-filters to predictive policing systems, machine learning (ML) systems are increasingly ubiquitous; they outperform humans on specific tasks [Mnih et al., 2013, Silver et al., 2016, Hamill, 2017] and often guide processes of human understanding and decisions [Carton et al., 2016, Doshi-Velez et al., 2014]. The deployment of ML systems in complex applications has led to a surge of interest in systems optimized not only for expected task performance but also other important criteria such as safety [Otte, 2013, Amodei et al., 2016, Varshney and Alemzadeh, 2016], nondiscrimination [Bostrom and Yudkowsky, 2014, Ruggieri et al., 2010, Hardt et al., 2016], avoiding technical debt [Sculley et al., 2015], or providing the right to explanation [Goodman and Flaxman, 2016]. For ML systems to be used safely, satisfying these auxiliary criteria is critical. However, unlike measures of performance such as accuracy, these criteria often cannot be completely quantifi | 1702.08608#1 | 1702.08608 | [
"1606.04155"
]
|
|
1702.08608#1 | Towards A Rigorous Science of Interpretable Machine Learning | ed. For example, we might not be able to enumerate all unit tests required for the safe operation of a semi-autonomous car or all confounds that might cause a credit scoring system to be discriminatory. In such cases, a popular fallback is the criterion of interpretability: if the system can explain its reasoning, we then can verify whether that reasoning is sound with respect to these auxiliary criteria. Unfortunately, there is little consensus on what interpretability in machine learning is and how to evaluate it for benchmarking. Current interpretability evaluation typically falls into two categories. | 1702.08608#0 | 1702.08608#2 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#2 | Towards A Rigorous Science of Interpretable Machine Learning | The ï¬ rst evaluates interpretability in the context of an application: if the system is useful in either a practical application or a simpliï¬ ed version of it, then it must be somehow interpretable (e.g. Ribeiro et al. [2016], Lei et al. [2016], Kim et al. [2015a], Doshi-Velez et al. [2015], Kim et al. [2015b]). The second evaluates interpretability via a quantiï¬ able proxy: a researcher might ï¬ rst sparse linear models, rule lists, gradient boosted treesâ are claim that some model classâ e.g. interpretable and then present algorithms to optimize within that class (e.g. Bucilu et al. [2006], Wang et al. [2017], Wang and Rudin [2015], Lou et al. [2012]). To large extent, both evaluation approaches rely on some notion of â youâ ll know it when you see it.â | 1702.08608#1 | 1702.08608#3 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#3 | Towards A Rigorous Science of Interpretable Machine Learning | Should we be concerned about a lack of rigor? Yes and no: the notions of interpretability above appear reasonable because they are reasonable: they meet the ï¬ rst test of having face- validity on the correct test set of subjects: human beings. However, this basic notion leaves many kinds of questions unanswerable: Are all models in all deï¬ ned-to-be-interpretable model classes equally interpretable? Quantiï¬ able proxies such as sparsity may seem to allow for comparison, but how does one think about comparing a model sparse in features to a model sparse in prototypes? Moreover, do all applications have the same interpretability needs? | 1702.08608#2 | 1702.08608#4 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#4 | Towards A Rigorous Science of Interpretable Machine Learning | If we are to move this field forward, to compare methods and understand when methods may generalize, we need to formalize these notions and make them evidence-based. (* Authors contributed equally.) [Figure 1: Taxonomy of evaluation approaches for interpretability: application-grounded evaluation (real humans, real tasks), human-grounded evaluation (real humans, simple tasks), and functionally-grounded evaluation (no real humans, proxy tasks), ordered from more specific and costly to less.] The objective of this review is to chart a path toward the definition and rigorous evaluation of interpretability. The need is urgent: recent European Union regulation will require algorithms that make decisions based on user-level predictors, which " | 1702.08608#3 | 1702.08608#5 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#5 | Towards A Rigorous Science of Interpretable Machine Learning | significantly affect" users to provide explanation ("right to explanation") by 2018 [Parliament and of the European Union, 2016]. In addition, the volume of research on interpretability is rapidly growing.1 In section 1, we discuss what interpretability is and contrast with other criteria such as reliability and fairness. In section 2, we consider scenarios in which interpretability is needed and why. In section 3, we propose a taxonomy for the evaluation of interpretability: application-grounded, human-grounded and functionally-grounded. We conclude with important open questions in section 4 and specific suggestions for researchers doing work in interpretability in section 5. # 1 What is Interpretability? | 1702.08608#4 | 1702.08608#6 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#6 | Towards A Rigorous Science of Interpretable Machine Learning | Deï¬ nition Interpret means to explain or to present in understandable terms.2 In the context of ML systems, we deï¬ ne interpretability as the ability to explain or to present in understandable terms to a human. A formal deï¬ nition of explanation remains elusive; in the ï¬ eld of psychology, Lombrozo [2006] states â explanations are... the currency in which we exchanged beliefsâ and notes that questions such as what constitutes an explanation, what makes some explanations better than others, how explanations are generated and when explanations are sought are just beginning to be addressed. Researchers have classiï¬ ed explanations from being â deductive-nomologicalâ in nature [Hempel and Oppenheim, 1948] (i.e. as logical proofs) to providing some sense of mechanism [Bechtel and Abrahamsen, 2005, Chater and Oaksford, 2006, Glennan, 2002]. Keil [2006] considered a broader deï¬ nition: implicit explanatory understanding. In this work, we propose data-driven ways to derive operational deï¬ nitions and evaluations of explanations, and thus, interpretability. Interpretability is used to conï¬ rm other important desiderata of ML systems There exist many auxiliary criteria that one may wish to optimize. Notions of fairness or unbiasedness imply that protected groups (explicit or implicit) are not somehow discriminated against. Privacy means the method protects sensitive information in the data. Properties such as reliability and robustness ascertain whether algorithms reach certain levels of performance in the face of parameter or input variation. Causality implies that the predicted change in output due to a perturbation will occur in the real system. Usable methods provide information that assist users to accomplish a taskâ e.g. a knob to tweak image lightingâ while trusted systems have the conï¬ dence of human usersâ e.g. aircraft collision avoidance systems. Some areas, such as the fairness [Hardt et al., 1Google Scholar ï¬ nds more than 20,000 publications related to interpretability in ML in the last ï¬ ve years. 2Merriam-Webster dictionary, accessed 2017-02-07 2 | 1702.08608#5 | 1702.08608#7 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#7 | Towards A Rigorous Science of Interpretable Machine Learning | 2016] and privacy [Toubiana et al., 2010, Dwork et al., 2012, Hardt and Talwar, 2010] the research communities have formalized their criteria, and these formalizations have allowed for a blossoming of rigorous research in these ï¬ elds (without the need for interpretability). However, in many cases, formal deï¬ nitions remain elusive. Following the psychology literature, where Keil et al. [2004] notes â explanations may highlight an incompleteness,â we argue that interpretability can assist in qual- itatively ascertaining whether other desiderataâ such as fairness, privacy, reliability, robustness, causality, usability and trustâ are met. For example, one can provide a feasible explanation that fails to correspond to a causal structure, exposing a potential concern. # 2 Why interpretability? Incompleteness Not all ML systems require interpretability. Ad servers, postal code sorting, air craft collision avoidance systemsâ all compute their output without human intervention. Explanation is not necessary either because (1) there are no signiï¬ cant consequences for unacceptable results or (2) the problem is suï¬ ciently well-studied and validated in real applications that we trust the systemâ s decision, even if the system is not perfect. So when is explanation necessary and appropriate? We argue that the need for interpretability stems from an incompleteness in the problem formalization, creating a fundamental barrier to optimization and evaluation. Note that incompleteness is distinct from uncertainty: the fused estimate of a missile location may be uncertain, but such uncertainty can be rigorously quantiï¬ ed and formally reasoned about. In machine learning terms, we distinguish between cases where unknowns result in quantiï¬ ed varianceâ e.g. trying to learn from small data set or with limited the eï¬ ect of sensorsâ and incompleteness that produces some kind of unquantiï¬ ed biasâ e.g. including domain knowledge in a model selection process. Below are some illustrative scenarios: â ¢ Scientiï¬ c Understanding: The humanâ s goal is to gain knowledge. We do not have a complete way of stating what knowledge is; thus the best we can do is ask for explanations we can convert into knowledge. | 1702.08608#6 | 1702.08608#8 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#8 | Towards A Rigorous Science of Interpretable Machine Learning | â ¢ Safety: For complex tasks, the end-to-end system is almost never completely testable; one cannot create a complete list of scenarios in which the system may fail. Enumerating all possible outputs given all possible inputs be computationally or logistically infeasible, and we may be unable to ï¬ ag all undesirable outputs. â ¢ Ethics: The human may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be completely encoded into the system (e.g., one might desire a â fairâ classiï¬ er for loan approval). Even if we can encode protections for speciï¬ c protected classes into the system, there might be biases that we did not consider a priori (e.g., one may not build gender-biased word embeddings on purpose, but it was a pattern in data that became apparent only after the fact). â ¢ Mismatched objectives: The agentâ s algorithm may be optimizing an incomplete objectiveâ that is, a proxy function for the ultimate goal. For example, a clinical system may be opti- mized for cholesterol control, without considering the likelihood of adherence; an automotive engineer may be interested in engine data not to make predictions about engine failures but to more broadly build a better car. | 1702.08608#7 | 1702.08608#9 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#9 | Towards A Rigorous Science of Interpretable Machine Learning | 3 â ¢ Multi-objective trade-oï¬ s: Two well-deï¬ ned desiderata in ML systems may compete with each other, such as privacy and prediction quality [Hardt et al., 2016] or privacy and non- discrimination [Strahilevitz, 2008]. Even if each objectives are fully-speciï¬ ed, the exact dy- namics of the trade-oï¬ may not be fully known, and the decision may have to be case-by-case. In the presence of an incompleteness, explanations are one of ways to ensure that eï¬ ects of gaps in problem formalization are visible to us. | 1702.08608#8 | 1702.08608#10 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#10 | Towards A Rigorous Science of Interpretable Machine Learning | # 3 How? A Taxonomy of Interpretability Evaluation Even in standard ML settings, there exists a taxonomy of evaluation that is considered appropriate. In particular, the evaluation should match the claimed contribution. Evaluation of applied work should demonstrate success in the application: a game-playing agent might best a human player, a classiï¬ er may correctly identify star types relevant to astronomers. In contrast, core methods work should demonstrate generalizability via careful evaluation on a variety of synthetic and standard benchmarks. In this section we lay out an analogous taxonomy of evaluation approaches for interpretabil- ity: application-grounded, human-grounded, and functionally-grounded. These range from task- relevant to general, also acknowledge that while human evaluation is essential to assessing in- terpretability, human-subject evaluation is not an easy task. A human experiment needs to be well-designed to minimize confounding factors, consumed time, and other resources. | 1702.08608#9 | 1702.08608#11 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#11 | Towards A Rigorous Science of Interpretable Machine Learning | We discuss the trade-oï¬ s between each type of evaluation and when each would be appropriate. # 3.1 Application-grounded Evaluation: Real humans, real tasks Application-grounded evaluation involves conducting human experiments within a real application. If the researcher has a concrete application in mindâ such as working with doctors on diagnosing patients with a particular diseaseâ the best way to show that the model works is to evaluate it with respect to the task: doctors performing diagnoses. This reasoning aligns with the methods of evaluation common in the human-computer interaction and visualization communities, where there exists a strong ethos around making sure that the system delivers on its intended task [Antunes et al., 2012, Lazar et al., 2010]. For example, a visualization for correcting segmentations from microscopy data would be evaluated via user studies on segmentation on the target image task [Suissa-Peleg et al., 2016]; a homework-hint system is evaluated on whether the student achieves better post-test performance [Williams et al., 2016]. | 1702.08608#10 | 1702.08608#12 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#12 | Towards A Rigorous Science of Interpretable Machine Learning | Speciï¬ cally, we evaluate the quality of an explanation in the context of its end-task, such as whether it results in better identiï¬ cation of errors, new facts, or less discrimination. Examples of experiments include: â ¢ Domain expert experiment with the exact application task. â ¢ Domain expert experiment with a simpler or partial task to shorten experiment time and increase the pool of potentially-willing subjects. In both cases, an important baseline is how well human-produced explanations assist in other humans trying to complete the task. To make high impact in real world applications, it is essential that we as a community respect the time and eï¬ ort involved to do such evaluations, and also demand 4 high standards of experimental design when such evaluations are performed. As HCI community recognizes [Antunes et al., 2012], this is not an easy evaluation metric. Nonetheless, it directly tests the objective that the system is built for, and thus performance with respect to that objective gives strong evidence of success. # 3.2 Human-grounded Metrics: Real humans, simpliï¬ ed tasks Human-grounded evaluation is about conducting simpler human-subject experiments that maintain the essence of the target application. Such an evaluation is appealing when experiments with the target community is challenging. These evaluations can be completed with lay humans, allowing for both a bigger subject pool and less expenses, since we do not have to compensate highly trained domain experts. Human-grounded evaluation is most appropriate when one wishes to test more general notions of the quality of an explanation. For example, to study what kinds of explanations are best understood under severe time constraints, one might create abstract tasks in which other factorsâ such as the overall task complexityâ can be controlled [Kim et al., 2013, Lakkaraju et al., 2016] The key question, of course, is how we can evaluate the quality of an explanation without a speciï¬ c end-goal (such as identifying errors in a safety-oriented task or identifying relevant patterns in a science-oriented task). Ideally, our evaluation approach will depend only on the quality of the explanation, regardless of whether the explanation is the model itself or a post-hoc interpretation of a black-box model, and regardless of the correctness of the associated prediction. Examples of potential experiments include: | 1702.08608#11 | 1702.08608#13 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#13 | Towards A Rigorous Science of Interpretable Machine Learning | â ¢ Binary forced choice: humans are presented with pairs of explanations, and must choose the one that they ï¬ nd of higher quality (basic face-validity test made quantitative). â ¢ Forward simulation/prediction: humans are presented with an explanation and an input, and must correctly simulate the modelâ s output (regardless of the true output). â ¢ Counterfactual simulation: humans are presented with an explanation, an input, and an output, and are asked what must be changed to change the methodâ s prediction to a desired output (and related variants). Here is a concrete example. The common intrusion-detection test [Chang et al., 2009] in topic models is a form of the forward simulation/prediction task: we ask the human to ï¬ nd the diï¬ erence between the modelâ s true output and some corrupted output as a way to determine whether the human has correctly understood what the modelâ s true output is. # 3.3 Functionally-grounded Evaluation: No humans, proxy tasks Functionally-grounded evaluation requires no human experiments; instead, it uses some formal deï¬ nition of interpretability as a proxy for explanation quality. Such experiments are appealing because even general human-subject experiments require time and costs both to perform and to get necessary approvals (e.g., IRBs), which may be beyond the resources of a machine learning researcher. Functionally-grounded evaluations are most appropriate once we have a class of models or regularizers that have already been validated, e.g. via human-grounded experiments. They may also be appropriate when a method is not yet mature or when human subject experiments are unethical. | 1702.08608#12 | 1702.08608#14 | 1702.08608 | [
"1606.04155"
]
|
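To make the idea of a functionally-grounded proxy concrete, here is a small toy sketch (ours; the data, models, and threshold are arbitrary) that compares two fitted linear models by coefficient sparsity. As the surrounding text stresses, such a score is only meaningful once sparsity itself has been validated (e.g., via human-grounded experiments) as a stand-in for interpretability in the target setting.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy regression data with only three informative features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

sparse_model = Lasso(alpha=0.1).fit(X, y)
dense_model = Ridge(alpha=1.0).fit(X, y)

def nonzero_coefficients(model, tol=1e-6):
    # Functionally-grounded proxy: fewer nonzero weights = "simpler" explanation.
    return int(np.sum(np.abs(model.coef_) > tol))

print("Lasso nonzeros:", nonzero_coefficients(sparse_model))
print("Ridge nonzeros:", nonzero_coefficients(dense_model))
```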
1702.08608#14 | Towards A Rigorous Science of Interpretable Machine Learning | 5 The challenge, of course, is to determine what proxies to use. For example, decision trees have been considered interpretable in many situations [Freitas, 2014]. In section 4, we describe open problems in determining what proxies are reasonable. Once a proxy has been formalized, the challenge is squarely an optimization problem, as the model class or regularizer is likely to be discrete, non-convex and often non-diï¬ erentiable. Examples of experiments include â ¢ Show the improvement of prediction performance of a model that is already proven to be interpretable (assumes that someone has run human experiments to show that the model class is interpretable). â ¢ Show that oneâ s method performs better with respect to certain regularizersâ for example, is more sparseâ compared to other baselines (assumes someone has run human experiments to show that the regularizer is appropriate). # 4 Open Problems in the Science of Interpretability, Theory and Practice It is essential that the three types of evaluation in the previous section inform each other: the factors that capture the essential needs of real world tasks should inform what kinds of simpliï¬ ed tasks we perform, and the performance of our methods with respect to functional proxies should reï¬ ect their performance in real-world settings. In this section, we describe some important open problems for creating these links between the three types of evaluations: 1. What proxies are best for what real-world applications? (functionally to application-grounded) 2. What are the important factors to consider when designing simpler tasks that maintain the essence of the real end-task? (human to application-grounded) 3. What are the important factors to consider when characterizing proxies for explanation qual- ity? (human to functionally-grounded) Below, we describe a path to answering each of these questions. # 4.1 Data-driven approach to discover factors of interpretability Imagine a matrix where rows are speciï¬ c real-world tasks, columns are speciï¬ c methods, and the entries are the performance of the method on the end-task. For example, one could represent how well a decision tree of depth less than 4 worked in assisting doctors in identifying pneumonia patients under age 30 in US. Once constructed, methods in machine learning could be used to identify latent dimensions that represent factors that are important to interpretability. | 1702.08608#13 | 1702.08608#15 | 1702.08608 | [
"1606.04155"
]
|
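As a minimal illustration of the task-by-method matrix idea above (ours; all dimensions and scores are made up), one could factor a small performance matrix and read off low-dimensional task and method representations, then predict unseen (task, method) pairs in the style of collaborative filtering:

```python
import numpy as np

# Hypothetical performance matrix: rows = real-world tasks, columns = methods.
rng = np.random.default_rng(0)
P = rng.uniform(0.0, 1.0, size=(8, 5))   # 8 tasks x 5 methods

# Rank-2 SVD approximation: one embedding vector per task and per method.
U, s, Vt = np.linalg.svd(P, full_matrices=False)
r = 2
task_embedding = U[:, :r] * s[:r]
method_embedding = Vt[:r, :].T

# A (task, method) score is approximated by the dot product of their embeddings,
# as in collaborative filtering; the embeddings are what one would try to interpret.
P_hat = task_embedding @ method_embedding.T
print("max reconstruction error:", np.abs(P - P_hat).max())
```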
1702.08608#15 | Towards A Rigorous Science of Interpretable Machine Learning | This approach is similar to efforts to characterize classification [Ho and Basu, 2002] and clustering problems [Garg and Kalai, 2016]. For example, one might perform matrix factorization to embed both tasks and methods respectively in low-dimensional spaces (which we can then seek to interpret), as shown in Figure 2. These embeddings could help predict what methods would be most promising for a new problem, similarly to collaborative filtering. The challenge, of course, is in creating this matrix. For example, one could imagine creating a repository of clinical cases in which the ML system has access to the patient's record but not certain | 1702.08608#14 | 1702.08608#16 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#16 | Towards A Rigorous Science of Interpretable Machine Learning | current features that are only accessible to the clinician, or a repository of discrimination-in-loan cases where the ML system must provide outputs that assist a lawyer in their decision. [Figure 2: An example of data-driven approach to discover factors in interpretability.] Ideally these would be linked to domain experts who have agreed to be employed to evaluate methods when applied to their domain of expertise. Just as there are now large open repositories for problems in classification, regression, and reinforcement learning [Blake and Merz, 1998, Brockman et al., 2016, Vanschoren et al., 2014], we advocate for the creation of repositories that contain problems corresponding to real-world tasks in which human-input is required. Creating such repositories will be more challenging than creating collections of standard machine learning datasets because they must include a system for human assessment, but with the availability of crowdsourcing tools these technical challenges can be surmounted. In practice, constructing such a matrix will be expensive since each cell must be evaluated in the context of a real application, and interpreting the latent dimensions will be an iterative effort of hypothesizing why certain tasks or methods share dimensions and then checking whether our hypotheses are true. In the next two open problems, we lay out some hypotheses about what latent dimensions may correspond to; these hypotheses can be tested via much less expensive human-grounded evaluations on simulated tasks. # 4.2 Hypothesis: task-related latent dimensions of interpretability | 1702.08608#15 | 1702.08608#17 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#17 | Towards A Rigorous Science of Interpretable Machine Learning | â ¢ Global vs. Local. Global interpretability implies knowing what patterns are present in general (such as key features governing galaxy formation), while local interpretability implies knowing the reasons for a speciï¬ c decision (such as why a particular loan application was rejected). The former may be important for when scientiï¬ c understanding or bias detection is the goal; the latter when one needs a justiï¬ cation for a speciï¬ c decision. â ¢ Area, Severity of Incompleteness. What part of the problem formulation is incomplete, and how incomplete is it? We hypothesize that the types of explanations needed may vary de- pending on whether the source of concern is due to incompletely speciï¬ ed inputs, constraints, 7 domains, internal model structure, costs, or even in the need to understand the training al- gorithm. The severity of the incompleteness may also aï¬ ect explanation needs. For example, one can imagine a spectrum of questions about the safety of self-driving cars. On one end, one may have general curiosity about how autonomous cars make decisions. At the other, one may wish to check a speciï¬ c list of scenarios (e.g., sets of sensor inputs that causes the car to drive oï¬ of the road by 10cm). In between, one might want to check a general propertyâ safe urban drivingâ without an exhaustive list of scenarios and safety criteria. â ¢ Time Constraints. How long can the user aï¬ ord to spend to understand the explanation? A decision that needs to be made at the bedside or during the operation of a plant must be understood quickly, while in scientiï¬ c or anti-discrimination applications, the end-user may be willing to spend hours trying to fully understand an explanation. â ¢ Nature of User Expertise. How experienced is the user in the task? The userâ s experience will aï¬ | 1702.08608#16 | 1702.08608#18 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#18 | Towards A Rigorous Science of Interpretable Machine Learning | ect what kind of cognitive chunks they have, that is, how they organize individual elements of information into collections [Neath and Surprenant, 2003]. For example, a clinician may have a notion that autism and ADHD are both developmental diseases. The nature of the userâ s expertise will also inï¬ uence what level of sophistication they expect in their explana- tions. For example, domain experts may expect or prefer a somewhat larger and sophisticated modelâ which conï¬ rms facts they knowâ over a smaller, more opaque one. These preferences may be quite diï¬ erent from hospital ethicist who may be more narrowly concerned about whether decisions are being made in an ethical manner. More broadly, decison-makers, sci- entists, compliance and safety engineers, data scientists, and machine learning researchers all come with diï¬ erent background knowledge and communication styles. Each of these factors can be isolated in human-grounded experiments in simulated tasks to deter- mine which methods work best when they are present. # 4.3 Hypothesis: method-related latent dimensions of interpretability Just as disparate applications may share common categories, disparate methods may share common qualities that correlate to their utility as explanation. As before, we provide a (non-exhaustive!) set of factors that may correspond to diï¬ erent explanation needs: Here, we deï¬ ne cognitive chunks to be the basic units of explanation. â ¢ Form of cognitive chunks. What are the basic units of the explanation? Are they raw features? Derived features that have some semantic meaning to the expert (e.g. â | 1702.08608#17 | 1702.08608#19 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#19 | Towards A Rigorous Science of Interpretable Machine Learning | neurological disorderâ for a collection of diseases or â chairâ for a collection of pixels)? Prototypes? â ¢ Number of cognitive chunks. How many cognitive chunks does the explanation contain? How does the quantity interact with the type: for example, a prototype can contain a lot more information than a feature; can we handle them in similar quantities? â ¢ Level of compositionality. Are the cognitive chunks organized in a structured way? Rules, hierarchies, and other abstractions can limit what a human needs to process at one time. For example, part of an explanation may involve deï¬ ning a new unit (a chunk) that is a function of raw units, and then providing an explanation in terms of that new unit. | 1702.08608#18 | 1702.08608#20 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#20 | Towards A Rigorous Science of Interpretable Machine Learning | 8 â ¢ Monotonicity and other interactions between cognitive chunks. Does it matter if the cognitive chunks are combined in linear or nonlinear ways? In monotone ways [Gupta et al., 2016]? Are some functions more natural to humans than others [Wilson et al., 2015, Schulz et al., 2016]? â ¢ Uncertainty and stochasticity. How well do people understand uncertainty measures? To what extent is stochasticity understood by humans? # 5 Conclusion: Recommendations for Researchers In this work, we have laid the groundwork for a process to rigorously deï¬ ne and evaluate inter- pretability. There are many open questions in creating the formal links between applications, the science of human understanding, and more traditional machine learning regularizers. In the mean time, we encourage the community to consider some general principles. The claim of the research should match the type of the evaluation. Just as one would be critical of a reliability-oriented paper that only cites accuracy statistics, the choice of evaluation should match the speciï¬ city of the claim being made. A contribution that is focused on a particular application should be expected to be evaluated in the context of that application (application- grounded evaluation), or on a human experiment with a closely-related task (human-grounded evaluation). A contribution that is focused on better optimizing a model class for some deï¬ nition of interpretability should be expected to be evaluated with functionally-grounded metrics. As a community, we must be careful in the work on interpretability, both recognizing the need for and the costs of human-subject experiments. In section 4, we hypothesized factors that may be the latent dimensions of interpretability. Creating a shared language around such factors is essential not only to evaluation, but also for the citation and comparison of related work. For example, work on creating a safe healthcare agent might be framed as focused on the need for explanation due to unknown inputs at the local scale, evaluated at the level of an application. In contrast, work on learning sparse linear models might also be framed as focused on the need for explanation due to unknown inputs, but this time evaluated at global scale. As we share each of our work with the community, we can do each other a service by describing factors such as | 1702.08608#19 | 1702.08608#21 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#21 | Towards A Rigorous Science of Interpretable Machine Learning | 1. How is the problem formulation incomplete? (Section 2) 2. At what level is the evaluation being performed? (application, general user study, proxy; Section 3) 3. What are task-related relevant factors? (e.g. global vs. local, severity of incompleteness, level of user expertise, time constraints; Section 4.2) 4. What are method-related relevant factors being explored? (e.g. form of cognitive chunks, number of cognitive chunks, compositionality, monotonicity, uncertainty; Section 4.3) and of course, adding and reï¬ ning these factors as our taxonomies evolve. These considerations should move us away from vague claims about the interpretability of a particular model and toward classifying applications by a common set of terms. | 1702.08608#20 | 1702.08608#22 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#22 | Towards A Rigorous Science of Interpretable Machine Learning | 9 Acknowledgments This piece would not have been possible without the dozens of deep conver- sations about interpretability with machine learning researchers and domain experts. Our friends and colleagues, we appreciate your support. We want to particularity thank Ian Goodfellow, Kush Varshney, Hanna Wallach, Solon Barocas, Stefan Rping and Jesse Johnson for their feedback. # References Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Man´e. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016. Pedro Antunes, Valeria Herskovic, Sergio F Ochoa, and Jose A Pino. Structuring dimensions for collaborative systems evaluation. ACM Computing Surveys, 2012. William Bechtel and Adele Abrahamsen. | 1702.08608#21 | 1702.08608#23 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#23 | Towards A Rigorous Science of Interpretable Machine Learning | Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 2005. Catherine Blake and Christopher J Merz. {UCI} repository of machine learning databases. 1998. Nick Bostrom and Eliezer Yudkowsky. The ethics of artiï¬ cial intelligence. The Cambridge Handbook of Artiï¬ cial Intelligence, 2014. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016. Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. | 1702.08608#22 | 1702.08608#24 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#24 | Towards A Rigorous Science of Interpretable Machine Learning | In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2006. Samuel Carton, Jennifer Helsby, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Joe Walsh, Identifying police In ACM SIGKDD International Conference on Knowledge Crystal Cody, CPT Estella Patterson, Lauren Haynes, and Rayid Ghani. oï¬ cers at risk of adverse events. Discovery and Data Mining. ACM, 2016. Jonathan Chang, Jordan L Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. | 1702.08608#23 | 1702.08608#25 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#25 | Towards A Rigorous Science of Interpretable Machine Learning | Reading tea leaves: How humans interpret topic models. In NIPS, 2009. Nick Chater and Mike Oaksford. Speculations on human causal learning and reasoning. Information sampling and adaptive cognition, 2006. Finale Doshi-Velez, Yaorong Ge, and Isaac Kohane. Comorbidity clusters in autism spectrum disorders: an electronic health record time-series analysis. Pediatrics, 133(1):e54â e63, 2014. Finale Doshi-Velez, Byron Wallace, and Ryan Adams. | 1702.08608#24 | 1702.08608#26 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#26 | Towards A Rigorous Science of Interpretable Machine Learning | Graph-sparse lda: a topic model with structured sparsity. Association for the Advancement of Artiï¬ cial Intelligence, 2015. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science Conference. ACM, 2012. 10 Alex Freitas. Comprehensible classiï¬ cation models: a position paper. ACM SIGKDD Explorations, 2014. Vikas K Garg and Adam Tauman Kalai. Meta-unsupervised-learning: A supervised approach to unsupervised learning. arXiv preprint arXiv:1612.09030, 2016. | 1702.08608#25 | 1702.08608#27 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#27 | Towards A Rigorous Science of Interpretable Machine Learning | Stuart Glennan. Rethinking mechanistic explanation. Philosophy of science, 2002. Bryce Goodman and Seth Flaxman. European union regulations on algorithmic decision-making and aâ right to explanationâ . arXiv preprint arXiv:1606.08813, 2016. Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander Van Esbroeck. Monotonic calibrated in- terpolated look-up tables. Journal of Machine Learning Research, 2016. | 1702.08608#26 | 1702.08608#28 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#28 | Towards A Rigorous Science of Interpretable Machine Learning | Sean Hamill. CMU computer won poker battle over humans by statistically signiï¬ cant mar- http://www.post-gazette.com/business/tech-news/2017/01/31/CMU-computer- gin. won-poker-battle-over-humans-by-statistically-significant-margin/stories/ 201701310250, 2017. Accessed: 2017-02-07. Moritz Hardt and Kunal Talwar. On the geometry of diï¬ erential privacy. In ACM Symposium on Theory of Computing. ACM, 2010. Moritz Hardt, Eric Price, and Nati Srebro. | 1702.08608#27 | 1702.08608#29 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#29 | Towards A Rigorous Science of Interpretable Machine Learning | Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 2016. In Carl Hempel and Paul Oppenheim. Studies in the logic of explanation. Philosophy of science, 1948. Tin Kam Ho and Mitra Basu. Complexity measures of supervised classiï¬ cation problems. IEEE transactions on pattern analysis and machine intelligence, 2002. Frank Keil. Explanation and understanding. Annu. Rev. Psychol., 2006. Frank Keil, Leonid Rozenblit, and Candice Mills. What lies beneath? understanding the limits of understanding. Thinking and seeing: Visual metacognition in adults and children, 2004. | 1702.08608#28 | 1702.08608#30 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#30 | Towards A Rigorous Science of Interpretable Machine Learning | Been Kim, Caleb Chacha, and Julie Shah. Inferring robot task plans from human team meetings: A generative modeling approach with logic-based prior. Association for the Advancement of Artiï¬ cial Intelligence, 2013. Been Kim, Elena Glassman, Brittney Johnson, and Julie Shah. model empowering humans via intuitive interaction. 2015a. iBCM: Interactive bayesian case Been Kim, Julie Shah, and Finale Doshi-Velez. Mind the gap: | 1702.08608#29 | 1702.08608#31 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#31 | Towards A Rigorous Science of Interpretable Machine Learning | A generative approach to inter- pretable feature selection and extraction. In Advances in Neural Information Processing Systems, 2015b. Himabindu Lakkaraju, Stephen H Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 1675â 1684. ACM, 2016. 11 Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. | 1702.08608#30 | 1702.08608#32 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#32 | Towards A Rigorous Science of Interpretable Machine Learning | Research methods in human-computer interaction. John Wiley & Sons, 2010. Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. arXiv preprint arXiv:1606.04155, 2016. Tania Lombrozo. The structure and function of explanations. Trends in cognitive sciences, 10(10): 464â 470, 2006. Yin Lou, Rich Caruana, and Johannes Gehrke. Intelligible models for classiï¬ cation and regression. In ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Ian Neath and Aimee Surprenant. Human Memory. 2003. | 1702.08608#31 | 1702.08608#33 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#33 | Towards A Rigorous Science of Interpretable Machine Learning | Clemens Otte. Safe and interpretable machine learning: A methodological review. In Computational Intelligence in Intelligent Data Analysis. Springer, 2013. Parliament and Council of the European Union. General data protection regulation. 2016. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. â why should i trust you?â : Explaining the predictions of any classiï¬ er. arXiv preprint arXiv:1602.04938, 2016. | 1702.08608#32 | 1702.08608#34 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#34 | Towards A Rigorous Science of Interpretable Machine Learning | Salvatore Ruggieri, Dino Pedreschi, and Franco Turini. Data mining for discrimination discovery. ACM Transactions on Knowledge Discovery from Data, 2010. Eric Schulz, Joshua Tenenbaum, David Duvenaud, Maarten Speekenbrink, and Samuel Gershman. Compositional inductive biases in function learning. bioRxiv, 2016. D Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Fran¸cois Crespo, and Dan Dennison. Hidden technical debt in machine learning systems. In Advances in Neural Information Processing Systems, 2015. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. | 1702.08608#33 | 1702.08608#35 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#35 | Towards A Rigorous Science of Interpretable Machine Learning | Nature, 2016. Lior Jacob Strahilevitz. Privacy versus antidiscrimination. University of Chicago Law School Working Paper, 2008. Adi Suissa-Peleg, Daniel Haehn, Seymour Knowles-Barley, Verena Kaynig, Thouis R Jones, Alyssa Wilson, Richard Schalek, Jeï¬ ery W Lichtman, and Hanspeter Pï¬ ster. Automatic neural recon- struction from petavoxel of electron microscopy data. Microscopy and Microanalysis, 2016. Vincent Toubiana, Arvind Narayanan, Dan Boneh, Helen Nissenbaum, and Solon Barocas. Adnos- tic: Privacy preserving targeted advertising. 2010. Joaquin Vanschoren, Jan N Van Rijn, Bernd Bischl, and Luis Torgo. Openml: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49â 60, 2014. 12 | 1702.08608#34 | 1702.08608#36 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#36 | Towards A Rigorous Science of Interpretable Machine Learning | Kush Varshney and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. CoRR, 2016. Fulton Wang and Cynthia Rudin. Falling rule lists. In AISTATS, 2015. Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampï¬ , and Perry MacNeille. Bayesian rule sets for interpretable classiï¬ | 1702.08608#35 | 1702.08608#37 | 1702.08608 | [
"1606.04155"
]
|
1702.08608#37 | Towards A Rigorous Science of Interpretable Machine Learning | cation. In International Conference on Data Mining, 2017. Joseph Jay Williams, Juho Kim, Anna Raï¬ erty, Samuel Maldonado, Krzysztof Z Gajos, Walter S Lasecki, and Neil Heï¬ ernan. Axis: Generating explanations at scale with learnersourcing and machine learning. In ACM Conference on Learning@ Scale. ACM, 2016. Andrew Wilson, Christoph Dann, Chris Lucas, and Eric Xing. The human kernel. In Advances in Neural Information Processing Systems, 2015. 13 | 1702.08608#36 | 1702.08608 | [
"1606.04155"
]
|
|
1702.08138#0 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | # Deceiving Google's Perspective API Built for Detecting Toxic Comments Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran Network Security Lab (NSL), Department of Electrical Engineering, University of Washington, Seattle, WA Email: {hosseinh, ksreeram, zhangbao, rp3}@uw.edu Abstract: | 1702.08138#1 | 1702.08138 | [
"1606.04435"
]
|
|
1702.08138#1 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage in discussions. Unfortunately, they also enable several problems, such as online harass- ment. Recently, Google and Jigsaw started a project called Perspective, which uses machine learning to automatically detect toxic language. A demonstration website has been also launched, which allows anyone to type a phrase in the interface and instantaneously see the toxicity score [1]. In this paper, we propose an attack on the Perspective toxic detection system based on the adversarial examples. We show that an adversary can subtly modify a highly toxic phrase in a way that the system assigns signiï¬ cantly lower toxicity score to it. We apply the attack on the sample phrases provided in the Perspective website and show that we can consistently reduce the toxicity scores to the level of the non-toxic phrases. The existence of such adversarial examples is very harmful for toxic detection systems and seriously undermines their usability. AI to help with providing a safe environment for online discussions [10]. Perspective is an API that enables the developers to use the toxic detector running on Googleâ s servers, to identify harassment and abuse on social media or more efï¬ ciently ï¬ ltering invective from the comments on a news website. Jigsaw has partnered with online communities and publishers, such as Wikipedia [3] and The New York Times [11], to implement this toxicity measurement system. Recently, a demonstration website has been launched, which allows anyone to type a phrase in the Perspectiveâ s interface and instantaneously see how it rates on the â | 1702.08138#0 | 1702.08138#2 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#2 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | toxicityâ scale [1]. The Perspective website has also open sourced the experiments, models and research data in order to explore the strengths and weaknesses of using machine learning as a tool for online discussion. # I. INTRODUCTION Social media platforms provide an environment where peo- ple can learn about the trends and news, freely share their opinions and engage in discussions. Unfortunately, the lack of a moderating entity in these platforms has caused several problems, ranging from the wide spread of fake news to online harassment [2]. Due to the growing concern about the impact of online harassment on the peopleâ s experience of the Internet, many platforms are taking steps to enhance the safety of the online environments [3], [4]. Some of the platforms employ approaches such as reï¬ ning the information based on crowdsourcing (upvotes/downvotes), turning off comments or manual moderation to mitigate the effect of the inappropriate contents [5]. These approaches however are inefï¬ cient and not scalable. As a result, there has been many calls for researchers to develop methods to automatically detect abusive or toxic context in the real time [6]. Recent advances in machine learning have transformed many domains such as computer vision [7], speech recogni- tion [8], and language processing [9]. Many researchers have explored using machine learning to also tackle the problem of online harassment. Recently, Google and Jigsaw launched a project called Perspective [1], which uses machine learning to automatically detect online insults, harassment, and abusive speech. The system intends to bring Conversation The implicit assumption of learning models is that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial scenarios [12]â [14]. One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeable by humans. Such inputs are called adversarial examples [15], and have been shown to be effective against different machine learning algorithms even when the adversary has only a black- box access to the target model [16]. In this paper, we demonstrate the vulnerability of the recently-released Googleâ s Perspective system against the adversarial examples. | 1702.08138#1 | 1702.08138#3 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#3 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | In the text classiï¬ cation task of the Perspective, adversarial examples can be deï¬ ned as mod- iï¬ ed texts which contain the same highly abusive language as the original text, yet receive a signiï¬ cantly lower toxicity score from the learning model. Through different experiments, we show that an adversary can deceive the system by misspelling the abusive words or by adding punctuations between the let- ters. The existence of adversarial examples is very harmful for toxic detector systems and seriously undermines their usability, especially since these systems are likely to be employed in adversarial settings. We conclude the paper by proposing some countermeasures to the proposed attack. | 1702.08138#2 | 1702.08138#4 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#4 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | # II. BACKGROUND A. Brief Description of Googleâ s Perspective API This work was supported by ONR grants N00014-14-1-0029 and N00014- 16-1-2710, ARO grant W911NF-16-1-0485 and NSF grant CNS-1446866. Perspective is an API created by Jigsaw and Googleâ s Counter Abuse Technology team in Conversation-AI. Conver- sation AI is a collaborative research effort exploring ML as a | 1702.08138#3 | 1702.08138#5 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#5 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | 1 TABLE I: Demosntration of the Attack on the Perspective Toxic Detection System . All phrases in the ï¬ rst column of the table are chosen from the examples provided by the Perspective website [1]. Original Phrase (Toxicity Score) Climate change is happening and itâ s not changing in our favor. If you think differently youâ re an idiot. (84%) Theyâ re stupid, itâ s getting warmer, we should enjoy it while it lasts (86%) They are liberal idiots who are uneducated (90%) idiots. backward thinking people. nationalists. not accept- ing facts. susceptible to lies. (80%) They are stupid and ignorant with no class (91%) Itâ s stupid and wrong (89%) If they voted for Hilary they are idiots (90%) Anyone who voted for Trump is a moron (80%) Screw you trump supporters (79%) Modiï¬ ed Phrase (Toxicity Score) Climate change is happening and itâ s not changing in our favor. If you think differently youâ re an idiiot. (20%) Theyâ re st.upid, itâ s getting warmer, we should enjoy it while it lasts (2%) They are liberal i.diots who are un.educated (15%) idiiots. backward thinking people. nationaalists. not accept- ing facts. susceptible to l.ies. (17%) They are st.upid and ig.norant with no class (11%) Itâ s stuipd and wrong (17%) If they voted for Hilary they are id.iots (12%) Anyone who voted for Trump is a mo.ron (13%) S c r e w you trump supporters (17%) | 1702.08138#4 | 1702.08138#6 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#6 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | tool for better discussions online [17]. The API uses machine learning models to score the toxicity of an input text, where toxic is deï¬ ned as â a rude, disrespectful, or unreasonable comment that is likely to make one leave a discussion.â Google and Jigsaw developed the measurement tool by taking millions of comments from different publishers and then asking panels of ten people to rate the comments on a scale from â very toxicâ to â very healthyâ contribution. The resulting judgments provided a large set of training examples for the machine learning model. method modiï¬ es the text such that the algorithm classiï¬ es the writer gender as a certain target gender, under limited knowledge of the classiï¬ er and while preserving the textâ s ï¬ uency and meaning. The modiï¬ | 1702.08138#5 | 1702.08138#7 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#7 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | ed text is not required to be adversarial, i.e., a human may also classify it as the target gender. In contrast, in the application of toxic text detection, the adversary intends to deceive the classiï¬ er, while maintaining the abusive content of the text. # III. THE PROPOSED ATTACKS Jigsaw has partnered with online communities and publish- ers to implement the toxicity measurement system. Wikipedia use it to perform a study of its editorial discussion pages [3] and The New York Times is planning to use it as a ï¬ rst pass of all its comments, automatically ï¬ agging abusive ones for its team of human moderators [11]. The API outputs the scores in real-time, so that publishers can integrate it into their website to show toxicity ratings to commenters even during the typing [5]. | 1702.08138#6 | 1702.08138#8 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#8 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | B. Adversarial Examples for Learning Systems Machine learning models are generally designed to yield the best performance on clean data and in benign settings. As a result, they are subject to attacks in adversarial scenarios [12]â [14]. One type of the vulnerabilities of the machine learning algorithms is that an adversary can change the algorithm prediction score by perturbing the input slightly, often un- noticeable by humans. Such inputs are called adversarial examples [15]. Adversarial examples have been applied to models for different tasks, such as images classiï¬ cation [15], [18], [19], music content analysis [20] and malware classiï¬ cation [21]. In this work, we generate adversarial examples on a real-world text classiï¬ er system. In the context of scoring the toxicity, adversarial examples can be deï¬ ned as modiï¬ ed phrases that contain the same highly abusive language as the original one, yet receive a signiï¬ cantly lower toxicity score by the model. In a similar work [22], the authors presented a method for gender obfuscating in social media writing. | 1702.08138#7 | 1702.08138#9 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#9 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | The proposed Recently, a website has been launched for Perspective demonstration, which allows anyone to type a phrase in the interface and instantaneously receive its toxicity score [1]. The website provides samples phrases for three categories of topics â that are often difï¬ cult to discuss onlineâ . The categories are 1) Climate Change, 2) Brexit and 3) US Election. section, we demonstrate an attack on the Perspective toxic detection system, based on the adver- sarial examples. In particular, we show that an adversary can subtly modify a toxic phrase such that the model will output a very low toxicity score for the modiï¬ ed phrase. The attack setting is as follows. The adversary possesses a phrase with a toxic content and tries different perturbations on the words, until she succeeds with signiï¬ cantly reducing the conï¬ dence of the model that the phrase is toxic. Note that the adversary does not have access to the model or training data, and can only query the model and get the toxicity score. Table I demonstrates the attack on sample phrases provided by the Perspective website. | 1702.08138#8 | 1702.08138#10 | 1702.08138 | [
"1606.04435"
]
|
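The query-and-perturb loop described in this attack setting can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' exact procedure: `score_toxicity` is a stand-in for whatever black-box scoring endpoint the adversary can query (the real Perspective API has its own request format and requires an API key), and the candidate generator covers only the modification types used in Table I (an inserted dot, spaced-out letters, a repeated letter, swapped letters).

```python
from typing import Callable, List

def perturb_word(word: str) -> List[str]:
    """Generate simple obfuscations of a single word."""
    variants = []
    for i in range(1, len(word)):
        variants.append(word[:i] + "." + word[i:])        # add a dot between two letters
    variants.append(" ".join(word))                       # add spaces between all letters
    for i in range(len(word)):
        variants.append(word[:i] + word[i] + word[i:])    # repeat one letter ("idiot" -> "idiiot")
    for i in range(len(word) - 1):
        swapped = list(word)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        variants.append("".join(swapped))                 # swap two adjacent letters
    return variants

def attack_phrase(phrase: str, toxic_words: List[str],
                  score_toxicity: Callable[[str], float],
                  threshold: float = 0.2) -> str:
    """Greedily replace each abusive word with the variant that yields the lowest
    toxicity score from the black-box model, stopping below `threshold`."""
    current = phrase
    for word in toxic_words:
        candidates = [current.replace(word, v) for v in perturb_word(word)]
        best = min(candidates, key=score_toxicity)
        if score_toxicity(best) < score_toxicity(current):
            current = best
        if score_toxicity(current) < threshold:
            break
    return current
```

Only the ordering of returned scores matters to the adversary, so any monotone proxy for the API's toxicity value can be plugged in as `score_toxicity`.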
1702.08138#10 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | The first column represents the original phrases along with their toxicity scores and the second column provides the adversarially modified phrases and their corresponding toxicity scores. For better demonstration of the attack, we chose phrases with different toxic words and also introduced different types of errors, rather than searching for the best error type that would potentially yield a lower toxicity score. The boldface words are the toxic words that the adversary has modified. The modifications are adding a (footnote 1: the experiments are done on the interface of the Perspective website on Feb. 24, 2017) TABLE II: Demonstration of False Alarm on the Perspective Toxic Detection System. | 1702.08138#9 | 1702.08138#11 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#11 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | All phrases in the first column of the table are chosen from the examples provided by the Perspective website [1]. Each entry lists the original phrase with its toxicity score, followed by the negated phrase and its score.
1. Original (84%): Climate change is happening and it's not changing in our favor. If you think differently you're an idiot -> Negated (73%): Climate change is happening and it's not changing in our favor. If you think differently you're not an idiot
2. Original (86%): They're stupid, it's getting warmer, we should enjoy it while it lasts -> Negated (74%): They're not stupid, it's getting warmer, we should enjoy it while it lasts
3. Original (90%): They are liberal idiots who are uneducated. -> Negated (83%): They are not liberal idiots who are uneducated.
4. Original (80%): idiots. backward thinking people. nationalists. not accepting facts. susceptible to lies. -> Negated (74%): not idiots. not backward thinking people. not nationalists. accepting facts. not susceptible to lies.
5. Original (91%): They are stupid and ignorant with no class -> Negated (84%): They are not stupid and ignorant with no class
6. Original (89%): It's stupid and wrong -> Negated (83%): It's not stupid and wrong
7. Original (90%): If they voted for Hilary they are idiots -> Negated (81%): If they voted for Hilary they are not idiots
8. Original (80%): Anyone who voted for Trump is a moron -> Negated (65%): Anyone who voted for Trump is not a moron
9. Original (79%): Screw you trump supporters -> Negated (68%): Will not screw you trump supporters | 1702.08138#10 | 1702.08138#12 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#12 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | dot between two letters, adding spaces between all letters or misspelling the word (repeating one letter twice or swapping two letters). As can be seen, we can consistently reduce the toxicity score to the level of the benign phrases by subtly modifying the toxic words. Moreover, we observed that the adversarial perturbations transfer among different phrases, i.e., if a certain modiï¬ cation to a word reduces the toxicity score of a phrase, the same modiï¬ cation to the word is likely to reduce the toxicity score also for another phrase. Using this property, an adversary can form a dictionary of the adversarial perturbations for every word and signiï¬ cantly simplify the attack process. Through the experiments, we made the following observa- tions: the Perspective system also wrongly assigns high tox- icity scores to the apparently benign phrases. Table II demonstrates the false alarm on the same sample phrases of Table I. The ï¬ rst column represents the original phrases along with the toxicity scores and the second column pro- vides the negated phrases and the corresponding toxicity scores. The boldface words are added to toxic phrases. As can be seen, the system consistently fails to capture the inherent semantic of the modiï¬ ed phrases and wrongly assigns high toxicity scores to them. Robustness to random misspellings: we observed that the system assigns 34% toxicity score to most of the misspelled and random words. Also, it is somewhat robust to phrases that contain randomly modiï¬ ed toxic words. â ¢ Vulnerability to poisoning attack: The Perspective interface allows users to provide a feedback on the toxicity score of phrases, suggesting that the learning algorithm updates itself using the new data. This can ex- pose the system to poisoning attacks, where an adversary modiï¬ es the training data (in this case, the labels) so that the model assigns low toxicity scores to certain phrases. IV. OPEN PROBLEMS IN DEFENSE METHODS The developers of Perspective have mentioned that the system is in the early days of research and development, and that the experiments, models, and research data are published to explore the strengths and weaknesses of using machine learning as a tool for online discussion. the Perspective system against the adversarial examples. Scoring the semantic toxicity of a phrase is clearly a very challenging task. | 1702.08138#11 | 1702.08138#13 | 1702.08138 | [
"1606.04435"
]
|
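The transferability observation above suggests a cheaper attack: build a reusable dictionary that maps each abusive word to one obfuscation that reliably lowers the score, then apply it to new phrases without further probing. The sketch below assumes the same hypothetical `score_toxicity` query function and the `perturb_word` candidate generator from the earlier sketch; the 50% score-drop criterion is an arbitrary illustration.

```python
def build_perturbation_dictionary(toxic_words, probe_phrases, score_toxicity,
                                  perturb_word, drop=0.5):
    """For each abusive word, cache one obfuscation that cuts the toxicity score
    of every probe phrase containing it by at least `drop`."""
    dictionary = {}
    for word in toxic_words:
        for variant in perturb_word(word):
            ok = True
            for phrase in probe_phrases:
                if word not in phrase:
                    continue
                before = score_toxicity(phrase)
                after = score_toxicity(phrase.replace(word, variant))
                if after > before * drop:   # did not reduce the score enough
                    ok = False
                    break
            if ok:
                dictionary[word] = variant
                break
    return dictionary

def apply_dictionary(phrase, dictionary):
    """Obfuscate a new phrase with zero additional queries to the model."""
    for word, variant in dictionary.items():
        phrase = phrase.replace(word, variant)
    return phrase
```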
1702.08138#13 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | In this following, we brieï¬ y review some of the possible approaches for improving the robustness of the toxic detection systems: â ¢ Adversarial Training: In this approach, during the training phase, we generate the adversarial examples and train the model to assign the original label to them [18]. In the context of toxic detection systems, we need to include different modiï¬ ed versions of the toxic words into the training data. While this approach may improve the robustness of the system against the adversarial examples, it does not seem practical to train the model on all variants of every word. | 1702.08138#12 | 1702.08138#14 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#14 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | â ¢ Spell checking: Many of the adversarial examples can be detected by ï¬ rst applying a spell checking ï¬ lter before the toxic detection system. This approach may however increase the false alarm. â ¢ Blocking suspicious users for a period of time: The adversary needs to try different error patterns to ï¬ nally evade the toxic detection system. Once a user fails to pass the threshold for a number of times, the system can block her for a while. This approach can force the users to less often use toxic language. | 1702.08138#13 | 1702.08138#15 | 1702.08138 | [
"1606.04435"
]
|
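The data-augmentation flavour of adversarial training described above can be sketched as follows. This is an illustration only: `perturb_word` is any obfuscation generator (for example the one in the attack sketch earlier), and the training set is assumed to be a list of (text, is_toxic) pairs.

```python
import random

def augment_with_obfuscations(examples, perturb_word, toxic_lexicon,
                              n_variants=3, seed=0):
    """Expand a labelled training set with obfuscated copies of toxic examples,
    so the model also sees dotted, spaced and misspelled abusive words."""
    rng = random.Random(seed)
    augmented = list(examples)
    for text, label in examples:
        if not label:
            continue
        for word in toxic_lexicon:
            if word not in text:
                continue
            variants = perturb_word(word)
            for v in rng.sample(variants, min(n_variants, len(variants))):
                augmented.append((text.replace(word, v), label))  # keep the toxic label
    return augmented
```

As the text notes, this only hardens the model against the perturbation types it has seen; it cannot enumerate every possible variant of every word.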
1702.08138#15 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | # V. CONCLUSION In this paper, we presented an attack on the recently- released Googleâ s Perspective API built for detecting toxic comments. We showed that the system can be deceived by slightly perturbing the abusive phrases to receive very low toxicity scores, while preserving the intended meaning. We also showed that the system has high false alarm rate in scoring high toxicity to benign phrases. We provided detailed examples for the studied cases. Our future work includes development of countermeasures against such attacks. 3 Disclaimer: The phrases used in Tables I and II are chosen from the examples provided in the Perspective website [1] for the purpose of demonstrating the results and do not represent the view or opinions of the authors or sponsoring agencies. # REFERENCES [1] â https://www.perspectiveapi.com/,â [2] M. Duggan, Online harassment. Pew Research Center, 2014. [3] â https://meta.wikimedia.org/wiki/Research:Detox,â [4] â | 1702.08138#14 | 1702.08138#16 | 1702.08138 | [
"1606.04435"
]
|
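The spell-checking countermeasure can begin with a much simpler normalization pass that undoes the cheapest obfuscations before scoring. The sketch below is an assumption-laden illustration, not a full spell checker: it rejoins single letters separated by spaces and strips punctuation inserted inside words, while repeated or swapped letters ("idiiot", "stuipd") would still need a dictionary-based corrector. It is also lossy (it removes dots in abbreviations, for instance), which is one way this defense can raise the false-alarm rate mentioned above.

```python
import re

def normalize(text: str) -> str:
    """Undo cheap word obfuscations before passing text to the toxicity scorer."""
    # "S c r e w you" -> "Screw you"
    text = re.sub(r"\b(?:[A-Za-z] ){2,}[A-Za-z]\b",
                  lambda m: m.group(0).replace(" ", ""), text)
    # "st.upid" / "mo.ron" -> "stupid" / "moron"
    text = re.sub(r"(?<=[A-Za-z])\.(?=[A-Za-z])", "", text)
    return text

# Example: normalize("They are st.upid and ig.norant") == "They are stupid and ignorant"
```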
1702.08138#16 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | https://www.nytimes.com/interactive/2016/09/20/insider/approve-or- reject-moderation-quiz.html,â [5] â https://www.wired.com/2017/02/googles-troll-ï¬ ghting-ai-now-belongs- world/,â [6] E. Wulczyn, N. Thain, and L. Dixon, â Ex machina: Personal attacks seen at scale,â arXiv preprint arXiv:1610.08914, 2016. [7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, â Imagenet classiï¬ cation with deep convolutional neural networks,â in Advances in neural infor- mation processing systems, pp. 1097â 1105, 2012. [8] G. E. Dahl, D. Yu, L. Deng, and A. | 1702.08138#15 | 1702.08138#17 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#17 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Acero, â Context-dependent pre- trained deep neural networks for large-vocabulary speech recognition,â IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30â 42, 2012. [9] R. Collobert and J. Weston, â A uniï¬ ed architecture for natural language processing: Deep neural networks with multitask learning,â in Proceed- ings of the 25th international conference on Machine learning, pp. 160â 167, ACM, 2008. [10] â https://jigsaw.google.com/,â [11] â http://www.nytco.com/the-times-is-partnering-with-jigsaw-to-expand- comment-capabilities/,â | 1702.08138#16 | 1702.08138#18 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#18 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | [12] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, â Can machine learning be secure?,â in Proceedings of the 2006 ACM Symposium on Information, computer and communications security, pp. 16â 25, ACM, 2006. [13] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar, â The security of machine learning,â Machine Learning, vol. 81, no. 2, pp. 121â 148, 2010. [14] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. | 1702.08138#17 | 1702.08138#19 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#19 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Tygar, â Ad- versarial machine learning,â in Proceedings of the 4th ACM workshop on Security and artiï¬ cial intelligence, pp. 43â 58, ACM, 2011. [15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, â Intriguing properties of neural networks,â arXiv preprint arXiv:1312.6199, 2013. [16] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, â | 1702.08138#18 | 1702.08138#20 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#20 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Practical black-box attacks against deep learning systems using adversarial examples,â arXiv preprint arXiv:1602.02697, 2016. [17] â https://conversationai.github.io/,â [18] I. J. Goodfellow, J. Shlens, and C. Szegedy, â Explaining and harnessing adversarial examples,â arXiv preprint arXiv:1412.6572, 2014. [19] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, â | 1702.08138#19 | 1702.08138#21 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#21 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | The limitations of deep learning in adversarial settings,â in 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372â 387, IEEE, 2016. [20] C. Kereliuk, B. L. Sturm, and J. Larsen, â Deep learning and music ad- versaries,â IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2059â 2071, 2015. [21] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, â | 1702.08138#20 | 1702.08138#22 | 1702.08138 | [
"1606.04435"
]
|
1702.08138#22 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Adversarial perturbations against deep neural networks for malware classiï¬ cation,â arXiv preprint arXiv:1606.04435, 2016. [22] S. Reddy, M. Wellesley, K. Knight, and C. Marina del Rey, â Obfuscating gender in social media writing,â NLP+ CSS 2016, p. 17, 2016. 4 | 1702.08138#21 | 1702.08138 | [
"1606.04435"
]
|
|
1702.04595#0 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | arXiv:1702.04595v1 [cs.CV] 15 Feb 2017. Published as a conference paper at ICLR 2017. VISUALIZING DEEP NEURAL NETWORK DECISIONS: PREDICTION DIFFERENCE ANALYSIS. Luisa M Zintgraf (1,3), Taco S Cohen (1), Tameem Adel (1), Max Welling (1,2); 1 University of Amsterdam, 2 Canadian Institute of Advanced Research, 3 Vrije Universiteit Brussel; {lmzintgraf,tameem.hesham}@gmail.com, {t.s.cohen, m.welling}@uva.nl # ABSTRACT This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides great additional insight into the decision making process of classifi | 1702.04595#1 | 1702.04595 | [
"1506.06579"
]
|
|
1702.04595#1 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | ers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classiï¬ ers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans). # INTRODUCTION Over the last few years, deep neural networks (DNNs) have emerged as the method of choice for perceptual tasks such as speech recognition and image classiï¬ cation. In essence, a DNN is a highly complex non-linear function, which makes it hard to understand how a particular classiï¬ cation comes about. This lack of transparency is a signiï¬ cant impediment to the adoption of deep learning in areas of industry, government and healthcare where the cost of errors is high. In order to realize the societal promise of deep learning - e.g., through self-driving cars or personalized medicine - it is imperative that classiï¬ ers learn to explain their decisions, whether it is in the lab, the clinic, or the courtroom. In scientiï¬ c applications, a better understanding of the complex dependencies learned by deep networks could lead to new insights and theories in poorly understood domains. In this paper, we present a new, probabilistically sound methodology for explaining classiï¬ cation decisions made by deep neural networks. The method can be used to produce a saliency map for each (instance, node) pair that highlights the parts (features) of the input that constitute most evidence for or against the activation of the given (internal or output) node. | 1702.04595#0 | 1702.04595#2 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#2 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | See ï¬ gure 1 for an example. In the following two sections, we review related work and then present our approach. In section 4 we provide several demonstrations of our technique for deep convolutional neural networks (DCNNs) trained on ImageNet data, and further how the method can be applied when classifying MRI brain scans of HIV patients with neurodegenerative disease. Figure 1: Example of our visualization method: explains why the DCNN (GoogLeNet) predicts "cockatoo". Shown is the evidence for (red) and against (blue) the prediction. We see that the facial features of the cockatoo are most supportive for the decision, and parts of the body seem to constitute evidence against it. | 1702.04595#1 | 1702.04595#3 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#3 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | In fact, the classiï¬ er most likely considers them evidence for the second-highest scoring class, white wolf. 1 Published as a conference paper at ICLR 2017 # 2 RELATED WORK Broadly speaking, there are two approaches for understanding DCNNs through visualization inves- tigated in the literature: ï¬ nd an input image that maximally activates a given unit or class score to visualize what the network is looking for (Erhan et al., 2009; Simonyan et al., 2013; Yosinski et al., 2015), or visualize how the network responds to a speciï¬ c input image in order to explain a particular classiï¬ cation made by the network. The latter will be the subject of this paper. One such instance-speciï¬ c method is class saliency visualization proposed by Simonyan et al. (2013) who measure how sensitive the classiï¬ cation score is to small changes in pixel values, by computing the partial derivative of the class score with respect to the input features using standard backpropagation. They also show that there is a close connection to using deconvolutional networks for visualization, proposed by Zeiler & Fergus (2014). Other methods include Shrikumar et al. (2016), who compare the activation of a unit when a speciï¬ c input is fed forward through the net to a reference activation for that unit. Zhou et al. (2016) and Bach et al. (2015) also generate interesting visualization results for individual inputs, but are both not as closely related to our method as the two papers mentioned above. The idea of our method is similar to another analysis Zeiler & Fergus (2014) make: they estimate the importance of input pixels by visualizing the probability of the (correct) class as a function of a gray patch occluding parts of the image. In this paper, we take a more rigorous approach at both removing information from the image and evaluating the effect of this. | 1702.04595#2 | 1702.04595#4 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#4 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | In the ï¬ eld of medical image classiï¬ cation speciï¬ cally, a widely used method for visualizing feature importances is to simply plot the weights of a linear classiï¬ er (Klöppel et al., 2008; Ecker et al., 2010), or the p-values of these weights (determined by permutation testing) (Mourao-Miranda et al., 2005; Wang et al., 2007). These are independent of the input image, and, as argued by Gaonkar & Davatzikos (2013) and Haufe et al. (2014), interpreting these weights can be misleading in general. | 1702.04595#3 | 1702.04595#5 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#5 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | The work presented in this paper is based on an instance-speciï¬ c method by Robnik-Å ikonja & Kononenko (2008), the prediction difference analysis, which is reviewed in the next section. Our main contributions are three substantial improvements of this method: conditional sampling (section 3.1), multivariate analysis (section 3.2), and deep visualization (section 3.3). # 3 APPROACH Our method is based on the technique presented by Robnik-Å ikonja & Kononenko (2008), which we will now review. For a given prediction, the method assigns a relevance value to each input feature with respect to a class c. The basic idea is that the relevance of a feature xi can be estimated by measuring how the prediction changes if the feature is unknown, i.e., the difference between p(c|x) and p(c|x\i), where x\i denotes the set of all input features except xi. | 1702.04595#4 | 1702.04595#6 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#6 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | To ï¬ nd p(c|x\i), i.e., evaluate the prediction when a feature is unknown, the authors propose three strategies. The ï¬ rst is to label the feature as unknown (which only few classiï¬ ers allow). The second is to re-train the classiï¬ er with the feature left out (which is clearly infeasible for DNNs and high-dimensional data like images). The third approach is to simulate the absence of a feature by marginalizing the feature: | 1702.04595#5 | 1702.04595#7 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#7 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | p(c|x_\i) = sum_{x_i} p(x_i|x_\i) p(c|x_\i, x_i)   (1) (with the sum running over all possible values for x_i). However, modeling p(x_i|x_\i) can easily become infeasible with a large number of features. Therefore, the authors approximate equation (1) by assuming that feature x_i is independent of the other features x_\i: p(c|x_\i) ≈ sum_{x_i} p(x_i) p(c|x_\i, x_i)   (2) The prior probability p(x_i) is usually approximated by the empirical distribution for that feature. Once the class probability p(c|x_\i) is estimated, it can be compared to p(c|x). We stick to an evaluation proposed by the authors referred to as weight of evidence, given by WE_i(c|x) = log2(odds(c|x)) - log2(odds(c|x_\i))   (3) (a Python sketch of this computation is given below). | 1702.04595#6 | 1702.04595#8 | 1702.04595 | [
"1506.06579"
]
|
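Equations (2) and (3) translate into code directly. The sketch below is an illustration of the univariate, marginal-sampling baseline rather than the authors' released implementation: `model(x)` is assumed to return class probabilities, the empirical marginal p(x_i) is approximated by the same feature taken from other images, and the odds use the Laplace correction described a little further below.

```python
import numpy as np

def weight_of_evidence(p_with, p_without, n_train, n_classes):
    """Eq. (3) with Laplace-corrected odds: p -> (p*N + 1) / (N + K)."""
    def odds(p):
        p = (p * n_train + 1.0) / (n_train + n_classes)
        return p / (1.0 - p)
    return np.log2(odds(p_with)) - np.log2(odds(p_without))

def marginal_relevance(x, i, model, reference_images, c,
                       n_train, n_classes, n_samples=10):
    """Relevance of feature i for class c under eq. (2): replace x[i] with values
    taken from other images at the same location and average the predictions."""
    p_with = model(x)[c]
    rng = np.random.default_rng(0)
    probs = []
    for idx in rng.integers(0, len(reference_images), size=n_samples):
        x_mod = x.copy()
        x_mod[i] = reference_images[idx][i]
        probs.append(model(x_mod)[c])
    p_without = float(np.mean(probs))
    return weight_of_evidence(p_with, p_without, n_train, n_classes)
```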
1702.04595#8 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | Figure 2: Simple illustration of the sampling procedure in Algorithm 1. Given the input image x, we select every possible patch x_w (in a sliding window fashion) of size k x k and place a larger patch hat(x)_w of size l x l around it. We can then conditionally sample x_w by conditioning on the surrounding patch hat(x)_w.
Algorithm 1: Evaluating the prediction difference using conditional and multivariate sampling (a Python sketch follows below).
Input: classifier with outputs p(c|x), input image x of size n x n, inner patch size k, outer patch size l > k, class of interest c, probabilistic model over patches of size l x l, number of samples S
Initialization: WE = zeros(n*n), counts = zeros(n*n)
for every patch x_w of size k x k in x do
    x' = copy(x)
    sum_w = 0
    define patch hat(x)_w of size l x l that contains x_w
    for s = 1 to S do
        x'_w <- x_w sampled from p(x_w | hat(x)_w \ x_w)
        sum_w += p(c|x')    # evaluate classifier
    end for
    p(c|x\x_w) := sum_w / S
    WE[coordinates of x_w] += log2(odds(c|x)) - log2(odds(c|x\x_w))
    counts[coordinates of x_w] += 1
end for
Output: WE / counts    # point-wise division | 1702.04595#7 | 1702.04595#9 | 1702.04595 | [
"1506.06579"
]
|
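Algorithm 1 maps onto a short NumPy loop. The sketch below follows the structure of the pseudocode for a single-channel image and is a readable illustration, not the optimized, mini-batched implementation the authors used: `model` returns class probabilities, `sample_patch(image, window, surround)` stands for whatever conditional generator is chosen (a Gaussian version is sketched further below), and `weight_of_evidence` is the helper from the previous sketch.

```python
import numpy as np

def prediction_difference(x, model, c, k, l, sample_patch, weight_of_evidence,
                          n_samples=10, n_train=1_000_000, n_classes=1000):
    """Sliding-window version of Algorithm 1 for a 2-D input x of shape (n, n)."""
    n = x.shape[0]
    we = np.zeros((n, n))
    counts = np.zeros((n, n))
    p_full = model(x)[c]
    pad = (l - k) // 2
    for i in range(0, n - k + 1):
        for j in range(0, n - k + 1):
            window = (slice(i, i + k), slice(j, j + k))
            surround = (slice(max(0, i - pad), min(n, i + k + pad)),
                        slice(max(0, j - pad), min(n, j + k + pad)))
            acc = 0.0
            for _ in range(n_samples):
                x_mod = x.copy()
                x_mod[window] = sample_patch(x, window, surround)
                acc += model(x_mod)[c]
            p_without = acc / n_samples
            we[window] += weight_of_evidence(p_full, p_without, n_train, n_classes)
            counts[window] += 1
    return we / np.maximum(counts, 1)   # point-wise average over overlapping windows
```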
1702.04595#9 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | A negative value on the other hand indicates that the feature displays evidence against the class: removing it also removes potentially conï¬ icting or irritating information and the classiï¬ er becomes more certain in the investigated class. 3.1 CONDITIONAL SAMPLING In equation (3), the conditional probability p(xi|x\i) of a feature xi is approximated using the marginal distribution p(xi). This is a very crude approximation. In images for example, a pixelâ s value is highly dependent on other pixels. We propose a much more accurate approximation, based on the following two observations: a pixel depends most strongly on a small neighborhood around it, and the conditional of a pixel given its neighborhood does not depend on the position of the pixel in the image. For a pixel xi, we can therefore ï¬ | 1702.04595#8 | 1702.04595#10 | 1702.04595 | [
"1506.06579"
]
|
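The conditional p(x_i | hat(x)_\i) of Section 3.1 needs a tractable model over patches; the experiments later use a multivariate normal. The closed-form Gaussian conditioning below is standard; treating l x l patches as Gaussian with mean mu and covariance Sigma estimated from training data is the modelling assumption, and the function names are illustrative.

```python
import numpy as np

def conditional_gaussian_sampler(mu, sigma, inner_idx, rng=None):
    """Return a function that resamples the inner pixels of a flattened l*l patch
    conditioned on the surrounding pixels.

    mu: (d,) mean of flattened patches; sigma: (d, d) covariance;
    inner_idx: indices (within the flattened patch) of the k*k window.
    """
    rng = rng or np.random.default_rng()
    d = mu.shape[0]
    outer_idx = np.setdiff1d(np.arange(d), inner_idx)
    s_ii = sigma[np.ix_(inner_idx, inner_idx)]
    s_io = sigma[np.ix_(inner_idx, outer_idx)]
    s_oo = sigma[np.ix_(outer_idx, outer_idx)]
    solve = np.linalg.solve(s_oo, s_io.T)        # Sigma_oo^{-1} Sigma_oi
    cond_cov = s_ii - s_io @ solve               # conditional covariance

    def sample(patch_flat):
        cond_mean = mu[inner_idx] + solve.T @ (patch_flat[outer_idx] - mu[outer_idx])
        return rng.multivariate_normal(cond_mean, cond_cov)

    return sample
```

A pixel that is well predicted by its surround gets a tight conditional, so resampling it barely changes the classifier output, which is exactly the down-weighting of redundant pixels described above.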
1702.04595#10 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | nd a patch Ë xi of size l à l that contains xi, and condition on the remaining pixels in that patch: p(xi|x\i) â p(xi|Ë x\i) . (4) This greatly improves the approximation while remaining completely tractable. For a feature to become relevant when using conditional sampling, it now has to satisfy two conditions: being relevant to predict the class of interest, and be hard to predict from the neighboring pixels. Relative to the marginal method, we therefore downweight the pixels that can easily be predicted and are thus redundant in this sense. | 1702.04595#9 | 1702.04595#11 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#11 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | 3 Published as a conference paper at ICLR 2017 3.2 MULTIVARIATE ANALYSIS Robnik-Å ikonja & Kononenko (2008) take a univariate approach: only one feature at a time is removed. However, we would expect that a neural network is relatively robust to just one feature of a high-dimensional input being unknown, like a pixel in an image. Therefore, we will remove several features at once by again making use of our knowledge about images by strategically choosing these feature sets: patches of connected pixels. Instead of going through all individual pixels, we go through all patches of size k à k in the image (k à k à 3 for RGB images and k à k à k for 3D images like MRI scans), implemented in a sliding window fashion. The patches are overlapping, so that ultimately an individual pixelâ s relevance is obtained by taking the average relevance obtained from the different patches it was in. Algorithm 1 and ï¬ gure 2 illustrate how the method can be implemented, incorporating the proposed improvements. 3.3 DEEP VISUALIZATION OF HIDDEN LAYERS When trying to understand neural networks and how they make decisions, it is not only interesting to analyze the input-output relation of the classiï¬ er, but also to look at what is going on inside the hidden layers of the network. We can adapt the method to see how the units of any layer of the network inï¬ uence a node from a deeper layer. Mathematically, we can formulate this as follows. Let h be the vector representation of the values in a layer H in the network (after forward-propagating the input up to this layer). Further, let z = z(h) be the value of a node that depends on h, i.e., a node in a subsequent layer. Then the analog of equation (2) is given by the expectation: g(z|h\i) â ¡ Ep(hi|h\i) [z(h)] = p(hi|h\i)z(h\i, hi) , hi (5) which expresses the distribution of z when unit hi in layer H is unobserved. The equation now works for arbitrary layer/unit combinations, and evaluates to the same as equation (1) when the input-output relation is analyzed. | 1702.04595#10 | 1702.04595#12 | 1702.04595 | [
"1506.06579"
]
|
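The same machinery transfers to hidden layers by replacing the class probability with a unit's activation, as in the activation-difference formula above. A minimal sketch, assuming `unit_value(h)` computes the downstream node z from an intermediate representation h and `sample_hidden` resamples the selected hidden units:

```python
import numpy as np

def activation_difference(h, unit_value, idx, sample_hidden, n_samples=10):
    """AD_i(z|h): drop in a downstream node's value when hidden units `idx`
    of representation h are treated as unobserved and resampled."""
    z_full = unit_value(h)
    acc = 0.0
    for _ in range(n_samples):
        h_mod = h.copy()
        h_mod[idx] = sample_hidden(h, idx)
        acc += unit_value(h_mod)
    return z_full - acc / n_samples
```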
1702.04595#12 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | To evaluate the difference between g(z|h) and g(z|h\i), we will in general use the activation difference, ADi(z|h) = g(z|h) â g(z|h\i) , for the case when we are not dealing with probabilities (and equation (3) is not applicable). # 4 EXPERIMENTS In this section, we illustrate how the proposed visualization method can be applied, on the ImageNet dataset of natural images when using DCNNs (section 4.1), and on a medical imaging dataset of MRI scans when using a logistic regression classiï¬ er (section 4.2). For marginal sampling we always use the empirical distribution, i.e., we replace a feature (patch) with samples taken directly from other images, at the same location. For conditional sampling we use a multivariate normal distribution. For both sampling methods we use 10 samples to estimate p(c|x\i) (since no signiï¬ cant difference was observed with more samples). | 1702.04595#11 | 1702.04595#13 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#13 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | Note that all images are best viewed digital and in color. Our implementation is available at github.com/lmzintgraf/DeepVis-PredDiff. IMAGENET: UNDERSTANDING HOW A DCNN MAKES DECISIONS We use images from the ILSVRC challenge (Russakovsky et al., 2015) (a large dataset of natural im- ages from 1000 categories) and three DCNNs: the AlexNet (Krizhevsky et al., 2012), the GoogLeNet (Szegedy et al., 2015) and the (16-layer) VGG network (Simonyan & Zisserman, 2014). We used the publicly available pre-trained models that were implemented using the deep learning framework caffe (Jia et al., 2014). Analyzing one image took us on average 20, 30 and 70 minutes for the respective classiï¬ ers AlexNet, GoogLeNet and VGG (using the GPU implementation of caffe and mini-batches with the standard settings of 10 samples and a window size of k = 10). The results shown here are chosen from among a small set of images in order to show a range of behavior of the algorithm. The shown images are quite representative of the performance of the method in general. Examples on randomly selected images, including a comparison to the sensitivity analysis of Simonyan et al. (2013), can be seen in appendix A. | 1702.04595#12 | 1702.04595#14 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#14 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | 4 Published as a conference paper at ICLR 2017 marginal conditional input marginal conditional ry. Figure 3: Visualization of the effects of marginal versus conditional sampling using the GoogLeNet classiï¬ er. The classiï¬ er makes correct predictions (ostrich and saxophone), and we show the evidence for (red) and against (blue) this decision at the output layer. We can see that conditional sampling gives more targeted explanations compared to marginal sampling. Also, marginal sampling assigns too much importance on pixels that are easily predictable conditioned on their neighboring pixels. | 1702.04595#13 | 1702.04595#15 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#15 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | african el., 0.63 1 2 4 - = ~ 9..* 7 " " agen a ae â a â a bs &. "4 oe he | Se |) Pe te. y : % J) : %. J s 4 : â 4 s Figure 4: Visualization of how different window sizes inï¬ uence the visualization result. We used the conditional sampling method and the AlexNet classiï¬ er with l = k + 4 and varying k. We can see that even when removing single pixels (k = 1), this has a noticeable effect on the classiï¬ er and more important pixels get a higher score. By increasing the window size we can get a more easily interpretable, smooth result until the image gets blurry for very large window sizes. We start this section by demonstrating our proposed improvements (sections 3.1 - 3.3). | 1702.04595#14 | 1702.04595#16 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#16 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | Marginal vs Conditional Sampling Figure 3 shows visualizations of the spatial support for the highest scoring class, using marginal and conditional sampling (with k = 10 and l = 14). We can see that conditional sampling leads to results that are more reï¬ ned in the sense that they concentrate more around the object. We can also see that marginal sampling leads to pixels being declared as important that are very easily predictable conditioned on their neighboring pixels (like in the saxophone example). Throughout our experiments, we have found that conditional sampling tends to give more speciï¬ c and ï¬ ne-grained results than marginal sampling. For the rest of our experiments, we therefore show results using conditional sampling only. Multivariate Analysis For ImageNet data, we have observed that setting k = 10 gives a good trade-off between sharp results and a smooth appearance. Figure 4 shows how different window sizes inï¬ uence the resolution of the visualization. Surprisingly, removing only one pixel does have a measurable effect on the prediction, and the largest effect comes from sensitive pixels. We expected that removing only one pixel does not have any effect on the classiï¬ cation outcome, but apparently the classiï¬ er is sensitive even to these small changes. However when using such a small window size, it is difï¬ | 1702.04595#15 | 1702.04595#17 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#17 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | cult to make sense of the sign information in the visualization. If we want to get a good impression of which parts in the image are evidence for/against a class, it is therefore better to use larger windows. If k is chosen too large however, the results tend to get blurry. Note that these results are not just simple averages of one another, but a multivariate approach is indeed necessary to observe the presented results. # Deep Visualization of Hidden Network Layers Our third main contribution is the extension of the method to neural networks; to understand the role of hidden layers in a DNN. Figure 5 shows how different feature maps in three different layers of the GoogLeNet react to the input of a tabby cat (see ï¬ gure 6, middle image). For each feature map in a convolutional layer, we ï¬ rst compute the relevance of the input image for each hidden unit in that map. To estimate what the feature map as a whole is doing, we show the average of the relevance vectors over all units in that feature map. | 1702.04595#16 | 1702.04595#18 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#18 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | The ï¬ rst convolutional layer works with different types of simple image ï¬ lters (e.g., edge detectors), and what we see is which parts of the input image respond 5 Published as a conference paper at ICLR 2017 pas pela Lal Figure 5: Visualization of feature maps from thee different layers of the GoogLeNet (l.t.r.: â conv1/7x7_s2â , â inception_3a/outputâ , â inception_5b/outputâ | 1702.04595#17 | 1702.04595#19 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#19 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | ), using conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). For each feature map in the convolutional layer, we ï¬ rst evaluate the relevance for every single unit, and then average the results over all the units in one feature map to get a sense of what the unit is doing as a whole. Red pixels activate a unit, blue pixels decreased the activation. Figure 6: Visualization of three different feature maps, taken from the â inception_3a/outputâ layer of the GoogLeNet (from the middle of the network). Shown is the average relevance of the input features over all activations of the feature map. We used patch sizes k = 10 and l = 14 (see alg. 1). Red pixels activate a unit, blue pixels decreased the activation. | 1702.04595#18 | 1702.04595#20 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#20 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | positively or negatively to these ï¬ lters. The layer we picked from somewhere in the middle of the network is specialized to higher level features (like facial features of the cat). The activations of the last convolutional layer are very sparse across feature channels, indicating that these units are highly specialized. To get a sense of what single feature maps in convolutional layers are doing, we can look at their visualization for different input images and look for patterns in their behavior. Figure 6 shows this for four different feature maps from a layer from the middle of the GoogLeNet network. We can directly see which kind of features the model has learned at this stage in the network. For example, one feature map is mostly activated by the eyes of animals (third row), and another is looking mostly at the background (last row). Penultimate vs Output Layer If we visualize the inï¬ uence of the input features on the penultimate (pre-softmax) layer, we show only the evidence for/against this particular class, without taking other classes into consideration. After the softmax operation however, the values of the nodes are all interdependent: a drop in the probability for one class could be due to less evidence for it, or because a different class becomes more likely. Figure 7 compares visualizations for the last two layers. By looking at the top three scoring classes, we can see that the visualizations in the penultimate layer look very similar if the classes are similar (like different dog breeds). When looking at the output layer however, they look rather different. Consider the case of the elephants: the top three classes are different elephant subspecies, and the visualizations of the penultimate layer look similar since every subspecies can be identiï¬ ed by similar characteristics. But in the output layer, we can see how the classiï¬ er decides for one of the three types of elephants and against the others: the ears in this case are the crucial difference. | 1702.04595#19 | 1702.04595#21 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#21 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | 6 Published as a conference paper at ICLR 2017 african eleph. tusker indian eleph. french bulldog boston bull â _ am. staffordsh. 29.86 29.29 25.78 27.77 26.35 17.67 a 2 3 E 2 5 @ = & am. staffordsh. 0.00 african eleph. tusker indian eleph. 0.63 0.01 0.36 â | 1702.04595#20 | 1702.04595#22 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#22 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | _y Figure 7: Visualization of the support for the top-three scoring classes in the penultimate- and output layer. Next to the input image, the ï¬ rst row shows the results with respect to the penultimate layer; the second row with respect to the output layer. For each image, we additionally report the values of the units. We used the AlexNet with conditional sampling and patch sizes k = 10 and l = 14 (see alg. 1). Red pixels are evidence for a class, and blue against it. alexnet googlenet alexnet googlenet vos Dy â Cad | ayo Y | 1702.04595#21 | 1702.04595#23 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#23 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | Figure 8: Comparison of the prediction visualization of different DCNN architectures. For two input images, we show the results of the prediction difference analysis when using different neural networks - the AlexNet, GoogLeNet and VGG network. Network Comparison When analyzing how neural networks make decisions, we can also compare how different network architectures inï¬ uence the visualization. Here, we tested our method on the AlexNet, the GoogLeNet and the VGG network. Figure 8 shows the results for the three different networks, on two input images. The AlexNet seems to more on contextual information (the sky in the balloon image), which could be attributed to it having the least complex architecture compared to the other two networks. It is also interesting to see that the VGG network deems the basket of the balloon as very important compared to all other pixels. The second highest scoring class in this case was a parachute - presumably, the network learned to not confuse a balloon with a parachute by detecting a square basket (and not a human). | 1702.04595#22 | 1702.04595#24 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#24 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | 4.2 MRI DATA: EXPLAINING CLASSIFIER DECISIONS IN MEDICAL IMAGING To illustrate how our visualization method can also be useful in a medical domain, we show some experimental results on an MRI dataset of HIV and healthy patients. In such settings, it is crucial that the practitioner has some insight into the algorithmâ s decision when classifying a patient, to weigh this information and incorporate it in the overall diagnosis process. The dataset used here is referred to as the COBRA dataset. It contains 3D MRIs from 100 HIV patients and 70 healthy individuals, included in the Academic Medical Center (AMC) in Amsterdam, The Netherlands. Of these subjects, diffusion weighted MRI data were acquired. Preprocessing of the data was performed with software developed in-house, using the HPCN-UvA Neuroscience Gateway and using resources of the Dutch e-Science Grid Shahand et al. (2015). As a result, Fractional Anisotropy (FA) maps were computed. FA is sensitive to microstructural damage and therefore expected to be, on average, decreased in patients. Subjects were scanned on two 3.0 Tesla scanner systems, 121 subjects on a Philips Intera system and 39 on a Philips Ingenia system. Patients and controls were evenly distributed. FA images were spatially normalized to standard space Andersson et al. (2007), resulting in volumes with 91 Ã 109 Ã 91 = 902, 629 voxels. 7 | 1702.04595#23 | 1702.04595#25 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#25 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | Published as a conference paper at ICLR 2017 We trained an L2-regularized Logistic Regression classiï¬ er on a subset of the MRI slices (slices 29-40 along the ï¬ rst axis) and on a balanced version of the dataset (by taking the ï¬ rst 70 samples of the HIV class) to achieve an accuracy of 69.3% in a 10-fold cross-validation test. Analyzing one image took around half an hour (on a CPU, with k = 3 and l = 7, see algorithm 1). For conditional sampling, we also tried adding location information in equation (2), i.e., we split up the 3D image into a 20 à 20 à 20 grid and also condition on the index in that grid. | 1702.04595#24 | 1702.04595#26 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#26 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | We found that this slightly improved the interpretability of the results, since the pixel values in the special case of MRI scans does depend on spacial location as well. Figure 9 (ï¬ rst row) shows one way via which the prediction difference results could be presented to a physician, for an HIV sample. By overlapping the prediction difference and the MRI image, the exact regions can be pointed out that are evidence for (red parts) or against (blue parts) the classiï¬ erâ | 1702.04595#25 | 1702.04595#27 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#27 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | s decision. The second row shows the results using the weights of the logistic regression classiï¬ er, which is a commonly used method in neuroscientiï¬ c literature. We can see that they are considerably noisier (in the sense that, compared to our method, the voxels relevant for the classiï¬ cation decisions are more scattered), and also, they are not speciï¬ c to the given image. Figure 10 shows the visualization results for four healthy, and four HIV samples. We can clearly see that the patterns for the two classes are distinct, and there is some pattern to the decision of the classiï¬ er, but which is still speciï¬ | 1702.04595#26 | 1702.04595#28 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#28 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | c to the input image. Figure 11 shows the same (HIV) sample as in ï¬ gure 9 along different axes, and ï¬ gure 12 shows how the visualization changes with different patch sizes. We believe that both varying the slice and patch size can give different insights to a clinician, and in clinical practice, a 3D animation where these parameters can be adjusted would be very useful for analyzing the visualization result. In general we can assume that the better the classiï¬ er, the closer the explanations for its decisions are to the true class difference. For clinical practice it is therefore crucial to have very good classiï¬ ers. This will increase computation time, but in many medical settings, longer waiting times for test results are common and worth the wait if the patient is not in an acute life threatening condition (e.g., when predicting HIV or Alzheimer from MRI scans, or the ï¬ eld of cancer diagnosis and detection). The presented results here are for demonstration purposes of the visualization method, and we claim no medical validity. A thorough qualitative analysis incorporating expert knowledge was outside the scope of this paper. # 5 FUTURE WORK In our experiments, we used a simple multivariate normal distribution for conditional sampling. We can imagine that using more sophisticated generative models will lead to better results: pixels that are easily predictable by their surrounding are downweighted even more. | 1702.04595#27 | 1702.04595#29 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#29 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | However this will also signiï¬ cantly increase the computational resources needed to produce the explanations. Similarly, we could try to modify equation (4) to get an even better approximation by using a conditional distribution that takes more information about the whole image into account (like adding spatial information for the MRI scans). To make the method applicable for clinical analysis and practice, a better classiï¬ cation algorithm is required. Also, software that visualizes the results as an interactive 3D model will improve the usability of the system. | 1702.04595#28 | 1702.04595#30 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#30 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | # 6 CONCLUSION We presented a new method for visualizing deep neural networks that improves on previous methods by using a more powerful conditional, multivariate model. The visualization method shows which pixels of a speciï¬ c input image are evidence for or against a node in the network. The signed information offers new insights - for research on the networks, as well as the acceptance and usability in domains like healthcare. While our method requires signiï¬ cant computational resources, real-time 3D visualization is possible when visualizations are pre-computed. With further optimization and powerful GPUs, pre-computation time can be reduced a lot further. In our experiments, we have presented several ways in which the visualization method can be put into use for analyzing how DCNNs make decisions. | 1702.04595#29 | 1702.04595#31 | 1702.04595 | [
"1506.06579"
]
|
1702.04595#31 | Visualizing Deep Neural Network Decisions: Prediction Difference Analysis | 8 Published as a conference paper at ICLR 2017 Input + Pred.Diff _ Input (inv) + Pred. Diff Input + Pred.Diff Input (inv) + Pred.Diff i 1 o ta x = , Input (inv) + Weights Figure 9: Visualization of the support for the correct classiï¬ cation â HIVâ , using the Prediction Differ- ence method and Logistic Regression Weights. For an HIV sample, we show the results with the prediction difference (ï¬ rst row), and using the weights of the logistic regression classiï¬ er (second row), for slices 29 and 40 (along the ï¬ rst axis). | 1702.04595#30 | 1702.04595#32 | 1702.04595 | [
"1506.06579"
]
|