While the aim here is not a detailed survey of causality, it is pertinent to note that the dependence theories all focus on the concept of counterfactuals: the state of affairs that would have resulted from some event that did not occur. Even transference theories, which are not explicitly defined as counterfactual, consider that causation is an unnatural transference of energy to the receiving object, implying what would have been otherwise. As such, the notion of 'counterfactual' is important in causality.
Gerstenberg et al. [49] tested whether people consider counterfactuals when making causal judgements in an experiment involving colliding balls. They presented experiment participants with different scenarios involving two balls colliding, with each scenario having different outcomes, such as one ball going through a gate, just missing the gate, or missing the gate by a long distance. While wearing eye-tracking equipment, participants were asked to determine what the outcome would have been (a counterfactual) had the candidate cause not occurred (the balls had not collided). Using the eye-gaze data from the tracking, they showed that their participants, even in these physical environments, would trace where the ball would have gone had the balls not collided, thus demonstrating that they used counterfactual simulation to make causal judgements.
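To make the idea of counterfactual simulation concrete, consider a minimal sketch (an illustration only; the toy one-dimensional world, the `simulate` function, and its parameters are assumptions of this sketch, not Gerstenberg et al.'s model): a candidate cause is judged causal when re-running the simulation without it changes the outcome.

```python
# Minimal counterfactual-simulation sketch for causal judgement in a toy
# one-dimensional "ball and gate" world. All numbers are illustrative.

def simulate(ball_velocity: float, collision: bool) -> bool:
    """Return True if ball B ends up going through the gate."""
    # In this toy world, a collision deflects ball B two units further.
    final_position = ball_velocity + (2.0 if collision else 0.0)
    return 3.0 <= final_position <= 5.0  # the gate spans positions 3 to 5

def collision_judged_causal(ball_velocity: float) -> bool:
    """Compare the actual outcome with the simulated counterfactual
    (no collision); the collision is judged causal if the outcomes differ."""
    actual = simulate(ball_velocity, collision=True)
    counterfactual = simulate(ball_velocity, collision=False)
    return actual != counterfactual

print(collision_judged_causal(2.0))   # True: without the collision, B misses the gate
print(collision_judged_causal(10.0))  # False: B misses the gate either way
```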
Necessary and Sufficient Causes. Kelley [87] proposes a taxonomy of causality in social attribution, but one that has more general applicability, noting that there are two main types of causal schemata for causing events: multiple necessary causes and multiple sufficient causes. The former defines a schema in which a set of events are all necessary to cause the event in question, while the latter defines a schema in which there are multiple possible ways to cause the event, and only one of these is required. Clearly, these can be interleaved; e.g. causes C1, C2, and C3 for event E, in which C1 is necessary and either of C2 or C3 is necessary, while both C2 and C3 are sufficient to cause the compound event (C2 or C3).
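The compound schema in this example can be written directly as a logical condition; the following minimal sketch (the encoding is mine, only the variable names C1, C2, C3, and E come from the example above) makes the necessary/sufficient distinction explicit.

```python
# Kelley-style causal schema for the example above: C1 is necessary, and
# C2 and C3 are each sufficient for the compound event (C2 or C3), which
# together with C1 brings about E.

def event_e(c1: bool, c2: bool, c3: bool) -> bool:
    return c1 and (c2 or c3)

print(event_e(True, True, False))   # True: the necessary cause plus one sufficient cause
print(event_e(True, False, False))  # False: C1 alone is necessary but not enough
print(event_e(False, True, True))   # False: without the necessary cause C1
```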
Internal and External Causes. Heider [66], the grandfather of causal attribution in social psychology, argues that causes fall into two camps: internal and external. Internal causes
of events are those due to the characteristics of an actor, while external causes are those due to the specific situation or the environment. Clearly, events can have causes that mix both. However, the focus of work from Heider was not on causality in general, but on social attribution, or the perceived causes of behaviour; that is, on how people attribute the behaviour of others. Nonetheless, work in this field, as we will see in Section 3, builds heavily on counterfactual causality.
Causal Chains. In causality and explanation, the concept of causal chains is important. A causal chain is a path of causes between a set of events, in which a cause from event C to event E indicates that C must occur before E. Any events without a cause are root causes.
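Viewed as a data structure, a causal chain is simply a path in a directed graph of events, and root causes are the events with no incoming cause. The sketch below is an illustrative encoding, not taken from Hilton et al.; the light-switch events are assumptions. It finds the root causes and one chain from a root to a given event.

```python
# Causal structure as a directed graph: edges[c] lists the events caused by c.
edges = {
    "switch flipped": ["current flows"],
    "current flows": ["light on"],
    "light on": [],
}

def root_causes(edges):
    """Events that are not caused by any other event in the graph."""
    caused = {effect for effects in edges.values() for effect in effects}
    return [event for event in edges if event not in caused]

def chain_to(edges, start, target):
    """Return one causal chain (path) from start to target, or None."""
    if start == target:
        return [start]
    for effect in edges.get(start, []):
        rest = chain_to(edges, effect, target)
        if rest is not None:
            return [start] + rest
    return None

print(root_causes(edges))                             # ['switch flipped']
print(chain_to(edges, "switch flipped", "light on"))  # ['switch flipped', 'current flows', 'light on']
```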
Hilton et al. [76] define five different types of causal chain, outlined in Table 2, and note that different causal chains are associated with different types of explanations.
| Type | Description | Example |
|---|---|---|
| Temporal | Distal events do not constrain proximal events. Events can be switched in time without changing the outcome. | A and B together cause C; the order of A and B is irrelevant; e.g. two people each flipping a coin win if both coins are heads; it is irrelevant who flips first. |
| Coincidental | Distal events do not constrain proximal events. The causal relationship holds in a particular case, but not in general. | A causes B this time, but the general relationship does not hold; e.g. a person smoking a cigarette causes a house fire, but this does not generally happen. |
| Unfolding | Distal events strongly constrain proximal events. The causal relationships hold in general and in this particular case, and cannot be switched. | A causes B and B causes C; e.g. switching a light switch causes an electric current to run to the light, which causes the light to turn on. |
| Opportunity chains | The distal event enables the proximal event. | A enables B, B causes C; e.g. installing a light switch enables it to be switched, which causes the light to turn on. |
| Pre-emptive | Distal precedes proximal and prevents the proximal from causing an event. | B causes C; A would have caused C if B did not occur; e.g. my action of unlocking the car with my remote lock would have unlocked the door if my wife had not already unlocked it with the key. |
Table 2: Types of Causal Chains according to Hilton et al. [76].
People do not need to understand a complete causal chain to provide a sound explanation. This is evidently true: causes of physical events can refer back to events that occurred during the Big Bang, but nonetheless, most adults can explain to a child why a bouncing ball eventually stops.
Formal Models of Causation. While several formal models of causation have been proposed, such as those based on conditional logic [53, 98], the model of causation that
I believe would be of interest to many in artificial intelligence is the formalisation of causality by Halpern and Pearl [58]. This is a general model that should be accessible to anyone with a computer science background, has been adopted by philosophers and psychologists, and is accompanied by many additional results, such as an axiomatisation [57] and a series of articles on complexity analysis [40, 41].
Halpern and Pearl [58] define a model-based approach using structural causal models over two sets of variables: exogenous variables, whose values are determined by factors external to the model, and endogenous variables, whose values are determined by relationships with other (exogenous or endogenous) variables. Each endogenous variable has a function that defines its value from other variables. A context is an assignment of values to variables. Intuitively, a context represents a 'possible world' of the model. A model/context pair is called a situation. Given this structure, Halpern and Pearl define an actual cause of an event X = x (that is, endogenous variable X receiving the value x) as a set of events E (each of the form Y = y) such that (informally) the following three criteria hold:
AC1 Both the event X = x and the cause E are true in the actual situation.
AC2 If there were some counterfactual values for the variables of the events in E, then the event X = x would not have occurred.
AC3 E is minimal – that is, there are no irrelevant events in the case.
A sufficient cause is simply a non-minimal actual cause; that is, it satisfies the first two items above.
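A brute-force reading of the first two conditions can be sketched in a few lines of code. The sketch below is illustrative only: it restricts itself to boolean variables, candidate events of the form 'variable is true', single-variable interventions, and a simple but-for reading of AC2; the full Halpern–Pearl definition also quantifies over witness sets of variables and enforces the minimality condition AC3. The match and forest-fire examples are my own assumptions.

```python
# Minimal structural-causal-model sketch: boolean variables only, a but-for
# reading of AC2, and no AC3 minimality check. Illustrative, not a full
# implementation of Halpern and Pearl's definition.

def evaluate(context, equations, interventions=None):
    """Compute every variable's value in a context, under optional interventions.
    Assumes the equations dict is listed in causal order."""
    values = dict(context)
    values.update(interventions or {})
    for var, f in equations.items():
        if not interventions or var not in interventions:
            values[var] = f(values)
    return values

def but_for_cause(candidate, outcome, context, equations):
    """AC1: the candidate event and the outcome both hold actually.
    AC2 (but-for form): flipping the candidate flips the outcome."""
    actual = evaluate(context, equations)
    if not (actual[candidate] and actual[outcome]):           # AC1
        return False
    flipped = evaluate(context, equations, {candidate: not actual[candidate]})
    return flipped[outcome] != actual[outcome]                # AC2, but-for form

# Conjunctive scenario: the fire needs both the lit match and oxygen.
ctx1 = {"match_lit": True, "oxygen": True}
eqs1 = {"fire": lambda v: v["match_lit"] and v["oxygen"]}
print(but_for_cause("match_lit", "fire", ctx1, eqs1))  # True

# Overdetermined scenario: lightning and an arsonist each suffice, so neither
# is a but-for cause; this is exactly the case where the full definition is needed.
ctx2 = {"lightning": True, "arsonist": True}
eqs2 = {"fire": lambda v: v["lightning"] or v["arsonist"]}
print(but_for_cause("lightning", "fire", ctx2, eqs2))  # False
```

The overdetermined example shows why AC2 is stated over sets of variables and witness contingencies rather than simple but-for dependence.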
We will return later to this model in Section 5.1.2 to discuss Halpern and Pearl's model of explanation.
2.1.2. Explanation
An explanation is an assignment of causal responsibility – Josephson and Josephson [81]
Explanation is both a process and a product, as noted by Lombrozo [104]. However, I argue that there are actually two processes in explanation, as well as the product:
1. Cognitive process – The process of abductive inference for 'filling the gaps' [27] to determine an explanation for a given event, called the explanandum, in which the causes for the event are identified, perhaps in relation to particular counterfactual cases, and a subset of these causes is selected as the explanation (or explanans).
In social science, the process of identifying the causes of a particular phenomenon is known as attribution, and is seen as just part of the entire process of explanation.
2. Product – The explanation that results from the cognitive process is the product of the cognitive explanation process.
3. Social process – The process of transferring knowledge between explainer and explainee, generally an interaction between a group of people, in which the goal is that the explainee has enough information to understand the causes of the event; although other types of goal exist, as we discuss later.
| Question | Reasoning | Description |
|---|---|---|
| What? | Associative | Reason about which unobserved events could have occurred, given the observed events |
| How? | Interventionist | Simulate a change in the situation to see if the event still happens |
| Why? | Counterfactual | Simulate alternative causes to see whether the event still happens |
Table 3: Classes of Explanatory Question and the Reasoning Required to Answer
But what constitutes an explanation? This question has created a lot of debate in philosophy, but both philosophical and psychological accounts of explanation stress the importance of causality in explanation – that is, an explanation refers to causes [159, 191, 107, 59]. There are, however, definitions of non-causal explanation [52], such as explaining 'what happened' or explaining what was meant by a particular remark [187]. These definitions are out of scope in this paper, and they present a different set of challenges to explainable AI.
2.1.3. Explanation as a Product
We take the definition that an explanation is an answer to a why–question [35, 138, 99, 102].
According to Bromberger [13], a why–question is a combination of a whether–question, preceded by the word 'why'. A whether–question is an interrogative question whose correct answer is either 'yes' or 'no'. The presupposition within a why–question is the fact referred to in the question that is under explanation, expressed as if it were true (or false if the question is a negative sentence). For example, the question 'why did they do that?' is a why–question, with the inner whether–question being 'did they do that?', and the presupposition being 'they did that'. However, as we will see in Section 2.3, why–questions are structurally more complicated than this: they are contrastive.
However, other types of questions can be answered by explanations. In Table 3, I propose a simple model for explanatory questions based on Pearl and Mackenzie's Ladder of Causation [141]. This model places explanatory questions into three classes: (1) what–questions, such as 'What event happened?'; (2) how–questions, such as 'How did that event happen?'; and (3) why–questions, such as 'Why did that event happen?'. From the perspective of reasoning, why–questions are the most challenging, because they use the most sophisticated reasoning. What–questions ask for factual accounts, possibly using associative reasoning to determine, from the observed events, which unobserved events also happened. How–questions are also factual, but require interventionist reasoning to determine the set of causes that, if removed, would prevent the event from happening. This may also require associative reasoning. We categorise what-if–questions as how–questions, as they are just a contrast case analysing what would happen under a different situation. Why–questions are the most challenging, as they require counterfactual reasoning to undo events and simulate other events that are not factual. This also requires associative and interventionist reasoning.
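Read as a lookup, the model above maps each question class onto the reasoning it requires; the snippet below merely restates Table 3 and the preceding paragraph in code form (the set names are mine).

```python
# Reasoning required by each class of explanatory question: each class also
# needs all of the reasoning required by the classes beneath it.
REASONING_REQUIRED = {
    "what": {"associative"},
    "how":  {"associative", "interventionist"},   # "what if" questions fall in here too
    "why":  {"associative", "interventionist", "counterfactual"},
}

print(REASONING_REQUIRED["why"] - REASONING_REQUIRED["how"])  # {'counterfactual'}
```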
Dennett [36] argues that 'why' is ambiguous and that there are two different senses of why–question: how come? and what for?. The former asks for a process narrative, without an explanation of what it is for, while the latter asks for a reason, which implies some intentional thought behind the cause. Dennett gives the examples of 'why are planets spherical?' and 'why are ball bearings spherical?'. The former asks for an explanation based on physics and chemistry, and is thus a how-come–question, because planets are not round for any reason. The latter asks for an explanation that gives what the designer made ball bearings spherical for: there is a reason, because people design them that way.
Given a why–question, Overton [138] defines an explanation as a pair consisting of: (1) the explanans, which is the answer to the question; and (2) the explanandum, which is the presupposition.
2.1.4. Explanation as Abductive Reasoning
As a cognitive process, explanation is closely related to abductive reasoning. Peirce [142] was the first author to consider abduction as a distinct form of reasoning, separate from induction and deduction, but which, like induction, went from effect to cause. His work focused on the difference between accepting a hypothesis via scientific experiments (induction), and deriving a hypothesis to explain observed phenomena (abduction). He defines the form of inference used in abduction as follows:
The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.
Clearly, this is an inference to explain the fact C from the hypothesis A, which is different from deduction and induction. However, this does not account for competing hypotheses. Josephson and Josephson [81] describe this more competitive form of abduction as:
D is a collection of data (facts, observations, givens).
H explains D (would, if true, explain D).
No other hypothesis can explain D as well as H does.
Therefore, H is probably true.
Harman [62] labels this process 'inference to the best explanation'. Thus, one can think of abductive reasoning as the following process: (1) observe some (presumably unexpected or surprising) events; (2) generate one or more hypotheses about these events; (3) judge the plausibility of the hypotheses; and (4) select the 'best' hypothesis as the explanation [78].
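This four-step process can be caricatured as a hypothesis-ranking loop. The sketch below is an illustration only: the scoring rule (observation coverage minus a small penalty per posited assumption) and the wet-grass example are assumptions of this sketch, not a proposal from Harman or from Josephson and Josephson.

```python
# Minimal "inference to the best explanation" sketch: each hypothesis lists the
# observations it would explain; score by coverage, with a small penalty for
# hypotheses that posit more assumptions (a crude stand-in for plausibility).

observations = {"wet grass", "wet pavement"}

hypotheses = {
    "it rained":           {"explains": {"wet grass", "wet pavement"}, "assumptions": 1},
    "sprinkler ran":       {"explains": {"wet grass"},                 "assumptions": 1},
    "rain and burst pipe": {"explains": {"wet grass", "wet pavement"}, "assumptions": 2},
}

def score(name):
    hypothesis = hypotheses[name]
    coverage = len(hypothesis["explains"] & observations)
    return coverage - 0.1 * hypothesis["assumptions"]

best = max(hypotheses, key=score)
print(best)  # 'it rained': explains every observation with the fewest assumptions
```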
Research in philosophy and cognitive science has argued that abductive reasoning is closely related to explanation. In particular, in trying to understand causes of events, people use abductive inference to determine what they consider to be the 'best' explanation. Harman [62] is perhaps the first to acknowledge this link, and more recently, experimental evaluations have demonstrated it [108, 188, 109, 154]. Popper [146] is perhaps the most influential proponent of abductive reasoning in the scientific process. He argued strongly for the scientific method to be based on empirical falsifiability of hypotheses, rather than the classic inductivist view at the time.
Early philosophical work considered abduction as some magical process of intuition – something that could not be captured by formalised rules because it did not fit the standard deductive model. However, this changed when artificial intelligence researchers began investigating abductive reasoning to explain observations, such as in diagnosis (e.g. medical diagnosis, fault diagnosis) [145, 156], intention/plan recognition [24], etc. The necessity to encode the process in a suitable computational form led to axiomatisations, with Pople [145] seeming to be the first to do this, and characterisations of how to implement such axiomatisations; e.g. Levesque [97]. From here, the process of abduction as a principled process gained traction, and it is now widely accepted that abduction, induction, and deduction are different modes of logical reasoning.
In this paper, abductive inference is not equated directly to explanation, because explanation also refers to the product and the social process; but abductive reasoning does fall into the category of cognitive process of explanation. In Section 4, we survey the cognitive science view of abductive reasoning, in particular, cognitive biases in hypothesis formation and evaluation.
2.1.5. Interpretability and Justification
Here, we briefly address the distinction between interpretability, explainability, justification, and explanation, as used in this article; and as they seem to be used in artificial intelligence.
Lipton [103] provides a taxonomy of the desiderata and methods for interpretable AI. This paper adopts Lipton's assertion that explanation is post-hoc interpretability. I use Biran and Cotton [9]'s definition of interpretability of a model as: the degree to which an observer can understand the cause of a decision. Explanation is thus one mode in which an observer may obtain understanding, but clearly, there are additional modes that one can adopt, such as making decisions that are inherently easier to understand or via introspection. I equate 'interpretability' with 'explainability'.
A justification explains why a decision is good, but does not necessarily aim to give an explanation of the actual decision-making process [9].
It is important to understand the similarities and differences between these terms as one reads this article, because some of the related research discussed is relevant to explanation only, in particular Section 5, which discusses how people present explanations to one another; while other sections, in particular Sections 3 and 4, discuss how people generate and evaluate explanations and explain the behaviour of others, and so are broader and can be used to create more explainable agents.
2.2. Why People Ask for Explanations
There are many reasons that people may ask for explanations. Curiosity is one primary criterion that humans use, but other pragmatic reasons include examination – for example, a teacher asking her students for an explanation on an exam for the purposes of testing the students' knowledge on a particular topic; and scientific explanation – asking why we observe a particular environmental phenomenon.
In this paper, we are interested in explanation in AI, and thus our focus is on how intelligent agents can explain their decisions. As such, this section is primarily concerned with why people ask for 'everyday' explanations of why specific events occur, rather than explanations for general scientific phenomena, although this work is still relevant in many cases.
It is clear that the primary function of explanation is to facilitate learning [104, 189]. Via learning, we obtain better models of how particular events or properties come about, and we are able to use these models to our advantage. Heider [66] states that people look for explanations to improve their understanding of someone or something so that they can derive a stable model that can be used for prediction and control. This hypothesis is backed up by research suggesting that people tend to ask questions about events or observations that they consider abnormal or unexpected from their own point of view [77, 73, 69].
Lombrozo [104] argues that explanations have a role in inference learning precisely because they are explanations, not necessarily just due to the causal information they reveal. First, explanations provide somewhat of a 'filter' on the causal beliefs of an event. Second, prior knowledge is changed by giving explanations; that is, by asking someone to provide an explanation as to whether a particular property is true or false, the explainer changes their perceived likelihood of the claim. Third, explanations that offer fewer causes and explanations that explain multiple observations are considered more believable and more valuable; but this does not hold for causal statements. Wilkenfeld and Lombrozo [188] go further and show that engaging in explanation but failing to arrive at a correct explanation can improve one's understanding. They describe this as 'explaining for the best inference', as opposed to the typical model of explanation as 'inference to the best explanation'.
Malle [112, Chapter 3], who gives perhaps the most complete discussion of everyday explanations in the context of explaining social action/interaction, argues that people ask for explanations for two reasons:
1. To find meaning: to reconcile the contradictions or inconsistencies between elements of our knowledge structures.
2. To manage social interaction: to create a shared meaning of something, and to change others' beliefs and impressions, their emotions, or to influence their actions.
Creating a shared meaning is important for explanation in AI. In many cases, an explanation provided by an intelligent agent will be for precisely this purpose – to create a shared understanding of the decision that was made between itself and a human observer, at least to some partial level.
Lombrozo [104] and Wilkenfeld and Lombrozo [188] note that explanations have several functions other than the transfer of knowledge, such as persuasion, learning, or assignment of blame; and that in some cases of social explanation, the goals of the explainer and explainee may be different. With respect to explanation in AI, persuasion is surely of interest: if the goal of an explanation from an intelligent agent is to generate trust from a human observer, then persuasion that a decision is the correct one could in some cases be considered more important than actually transferring the true cause. For example, it may be better to give a less likely explanation that is more convincing to the explainee if we want them to act in some positive way. In this case, the goal of the explainer (to generate trust) is different to that of the explainee (to understand a decision).
2.3. Contrastive Explanation
The key insight is to recognise that one does not explain events per se, but that one explains why the puzzling event occurred in the target cases but not in some counterfactual contrast case. – Hilton [72, p. 67]
I will dedicate a subsection to discussing one of the most important findings in the philosophical and cognitive science literature from the perspective of explainable AI: contrastive explanation. Research shows that people do not explain the causes of an event per se, but explain the cause of an event relative to some other event that did not occur; that is, an explanation is always of the form 'Why P rather than Q?', in which P is the target event and Q is a counterfactual contrast case that did not occur, even if Q is implicit in the question. This is called contrastive explanation.
Some authors refer to Q as the counterfactual case [108, 69, 77]. However, it is important to note that this is not the same counterfactual that one refers to when determining causality (see Section 2.1.1). For causality, the counterfactuals are hypothetical 'non-causes' in which the event-to-be-explained does not occur – that is, a counterfactual to cause C – whereas in contrastive explanation, the counterfactuals are hypothetical outcomes – that is, a counterfactual to event E [127].
Lipton [102] refers to the two cases, P and Q, as the fact and the foil respectively; the fact being the event that did occur, and the foil being the event that did not. To avoid confusion, throughout the remainder of this paper, we will adopt this terminology and use counterfactual to refer to the hypothetical case in which the cause C did not occur, and foil to refer to the hypothesised case Q that was expected rather than P.
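One simple way to operationalise this is to answer 'Why P rather than Q?' with only those causes of the fact P that are not shared with the foil Q. The sketch below is an illustration of that idea under strong assumptions (hand-listed cause sets for each outcome), not a method from the literature.

```python
# Contrastive explanation sketch: explain the fact by the causes it does not
# share with the foil. The cause sets below are hand-written assumptions.

causes = {
    "Elizabeth opened the door":   {"Elizabeth wanted fresh air",
                                    "the door was closed",
                                    "the door was within reach"},
    "Elizabeth opened the window": {"Elizabeth wanted fresh air",
                                    "the window was closed",
                                    "the window was within reach"},
}

def contrastive_explanation(fact: str, foil: str) -> set:
    """Causes that discriminate the fact from the foil."""
    return causes[fact] - causes[foil]

print(sorted(contrastive_explanation("Elizabeth opened the door",
                                     "Elizabeth opened the window")))
# ['the door was closed', 'the door was within reach']
# The shared cause ("Elizabeth wanted fresh air") does not discriminate the
# fact from the foil, so it is not part of the contrastive explanation.
```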
Most authors in this area argue that all why-questions ask for contrastive explanations, even if the foils are not made explicit [102, 77, 69, 72, 110, 108], and that people are good at inferring the foil, e.g. from language and tone. For example, given the question, "Why did Elizabeth open the door?", there are many, possibly an infinite number, of foils; e.g. "Why did Elizabeth open the door, rather than leave it closed?", "Why did Elizabeth open the door rather than the window?", or "Why did Elizabeth open the door rather than Michael opening it?". These different contrasts have different explanations, and there is no inherent one that is certain to be the foil for this question. The negated presupposition not(Elizabeth opens the door) refers to an entire class of foils, including all those listed already. Lipton [102] notes that a "central requirement for a sensible contrastive question is that the fact and the foil have a largely similar history, against which the differences stand out. When the histories are disparate, we do not know where to begin to answer the question".
It is important that the explainee understands the counterfactual case [69]. For example, given the question "Why did Elizabeth open the door?", the answer "Because she was hot" is a good answer if the foil is Elizabeth leaving the door closed, but not a good answer if the foil is "rather than turning on the air conditioning", because the fact that Elizabeth is hot explains both the fact and the foil.
The idea of contrastive explanation should not be controversial if we accept the argument outlined in Section 2.2 that people ask for explanations about events or observations that they consider abnormal or unexpected from their own point of view [77, 73, 69]. In such cases, people expect to observe a particular event, but then observe another, with the observed event being the fact and the expected event being the foil.
Van Bouwel and Weber [175] define four types of explanatory question, three of which are contrastive:
Plain fact: Why does object a have property P?
P-contrast: Why does object a have property P, rather than property Q?
O-contrast: Why does object a have property P, while object b has property Q?
T-contrast: Why does object a have property P at time t, but property Q at time t′?
Van Bouwel and Weber note that differences occur on properties within an object (P-contrast), between objects themselves (O-contrast), and within an object over time (T-contrast). They reject the idea that all "plain fact" questions have an implicit foil, proposing that plain-fact questions require showing details across a "non-interrupted" causal chain across time. They argue that plain-fact questions are typically asked due to curiosity, such as desiring to know how certain facts fit into the world, while contrastive questions are typically asked when unexpected events are observed.
Lipton [102] argues that contrastive explanations between a fact P and a foil Q are, in general, easier to derive than "complete" explanations for plain-fact questions about P. For example, consider the arthropod classification algorithm in Section 1.4. To be a beetle, an arthropod must have six legs, but this does not cause an arthropod to be a beetle – other causes are necessary. Lipton contends that we could answer the P-contrast question such as "Why is image J labelled as a Beetle instead of a Spider?" by citing the fact that the arthropod in the image has six legs. We do not need information about eyes, wings, or stingers to answer this, whereas to explain why image J is a beetle in a non-contrastive way, we must cite all causes.
The hypothesis that all causal explanations are contrastive is not merely philosophical. In Section 4, we see several bodies of work supporting this, and these provide more detail as to how people select and evaluate explanations based on the contrast between fact and foil.
2.4. Types and Levels of Explanation
The type of explanation provided to a question is dependent on the particular question asked; for example, asking why some event occurred is different to asking under what circumstances it could have occurred; that is, the actual vs. the hypothetical [159]. However, for the purposes of answering why-questions, we will focus on a particular subset of philosophical work in this area.
Aristotle's Four Causes model, also known as the Modes of Explanation model, continues to be foundational for cause and explanation. Aristotle proposed an analytic scheme, classed into four different elements, that can be used to provide answers to why-questions [60]:
1. Material: The substance or material of which something is made. For example, rubber is a material cause for a car tyre.
2. Formal: The form or properties of something that make it what it is. For example, being round is a formal cause of a car tyre. These are sometimes referred to as categorical explanations.
3. Efficient: The proximal mechanisms that cause something to change. For example, a tyre manufacturer is an efficient cause for a car tyre. These are sometimes referred to as mechanistic explanations.
4. Final: The end or goal of something. Moving a vehicle is a final cause of a car tyre. These are sometimes referred to as functional or teleological explanations.
A single why-question can have explanations from any of these categories. For example, consider the question: "Why does this pen contain ink?". A material explanation is based on the idea that the pen is made of a substance that prevents the ink from leaking out. A formal explanation is that it is a pen and pens contain ink. An efficient explanation is that there was a person who filled it with ink. A final explanation is that pens are for writing, and so require ink.
Several other authors have proposed models similar to Aristotle's, such as Dennett [35], who proposed that people take three stances towards objects: physical, design, and intention; and Marr [119], building on earlier work with Poggio [120], who define the computational, representational, and hardware levels of understanding for computational problems.
Kass and Leake [85] define a categorisation of explanations of anomalies into three types: (1) intentional; (2) material; and (3) social. The intentional and material categories correspond roughly to Aristotle's final and material categories, however, the social category does not correspond to any particular category in the models of Aristotle, Marr [119], or Dennett [35]. The social category refers to explanations about human behaviour that is not intentionally driven. Kass and Leake give the example of an increase in crime rate in a city, which, while due to intentional behaviour of individuals in that city, is not a phenomenon that can be said to be intentional. While individual crimes are committed with intent, it cannot be said that the individuals had the intent of increasing the crime rate – that is merely an effect of the behaviour of a group of individuals.
# 2.5. Structure of Explanation
As we saw in Section 2.1.2, causation is a major part of explanation. Earlier accounts of explanation from Hempel and Oppenheim [68] argued for logically deductive models of explanation. Kelley [86] subsequently argued instead that people consider co-variation in constructing explanations, and proposed a statistical model of explanation. However, while influential, subsequent experimental research uncovered many problems with these models, and currently, both the deductive and statistical models of explanation are no longer considered valid theories of everyday explanation in most camps [114].
Overton [140, 139] defines a model of scientific explanation. In particular, Overton [139] defines the structure of explanations. He defines five categories of properties or objects that are explained in science: (1) theories: sets of principles that form building blocks for models; (2) models: an abstraction of a theory that represents the relationships between kinds and their attributes; (3) kinds: an abstract universal class that supports counterfactual reasoning; (4) entities: an instantiation of a kind; and (5) data: statements about activities (e.g. measurements, observations). The relationships between these are shown in Figure 3.
Figure 3: Overton's five categories and four relations in scientific explanation, reproduced from Overton [139, p. 54, Figure 3.1]
From these categories, Overton [139] provides a crisp definition of the structure of scientific explanations. He argues that explanations of phenomena at one level must be relative to and refer to at least one other level, and that explanations between two such levels must refer to all intermediate levels. For example, an arthropod (Entity) has eight legs (Data). Entities of this Kind are spiders, according to the Model of our Theory of arthropods. In this example, the explanation is constructed by appealing to the Model of insects, which, in turn, appeals to a particular Theory that underlies that Model. Figure 4 shows the structure of a theory-data explanation, which is the most complex because it has the longest chain of relationships between any two levels.
Figure 4: Overton's general structure of a theory-data explanation, reproduced from Overton [139, p. 54, Figure 3.2]
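To make this structure concrete, the ordered categories and the all-intermediate-levels constraint can be encoded directly. The following is a rough sketch in Python of my own encoding, offered purely for illustration; it is not Overton's formalism.

```python
# Overton's five categories, ordered from most abstract to most concrete.
LEVELS = ["theory", "model", "kind", "entity", "data"]

def explanation_chain(from_level, to_level):
    """Return every level an explanation linking two levels must refer to,
    inclusive of both ends, following the all-intermediate-levels constraint."""
    i, j = LEVELS.index(from_level), LEVELS.index(to_level)
    if i <= j:
        return LEVELS[i:j + 1]
    return list(reversed(LEVELS[j:i + 1]))

# A theory-data explanation is the longest case: it touches all five categories.
print(explanation_chain("theory", "data"))
# ['theory', 'model', 'kind', 'entity', 'data']

# An entity-data explanation, e.g. "this arthropod (Entity) has eight legs (Data)",
# involves only adjacent categories.
print(explanation_chain("entity", "data"))
# ['entity', 'data']
```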
With respect to social explanation, Malle [112] argues that social explanation is best understood as consisting of three layers:
1. Layer 1: A conceptual framework that outlines the assumptions people make about human behaviour and explanation.
2. Layer 2: The psychological processes that are used to construct explanations.
3. Layer 3: Language layer that specifies the type of linguistic structures people use in giving explanations.
I will present Malle's views of these three layers in more detail in the sections on social attribution (Section 3), cognitive processes (Section 4), and social explanation (Section 5). This work is collated into Malle's 2004 book [112].
2.6. Explanation and XAI
This section presents some ideas on how the philosophical work outlined above affects researchers and practitioners in XAI.
2.6.1. Causal Attribution is Not Causal Explanation
An important concept is the relationship between cause attribution and explanation. Extracting a causal chain and displaying it to a person is causal attribution, not (necessarily) an explanation. While a person could use such a causal chain to obtain their own explanation, I argue that this does not constitute giving an explanation. In particular, for most AI models, it is not reasonable to expect a lay-user to be able to interpret a causal chain, no matter how it is presented. Much of the existing work in explainable AI literature is on the causal attribution part of explanation – something that, in many cases, is the easiest part of the problem because the causes are well understood, formalised, and accessible by the underlying models. In later sections, we will see more on the difference between attribution and explanation, why existing work in causal attribution is only part of the problem of explanation, and insights of how this work can be extended to produce more intuitive explanations.
# 2.6.2. Contrastive Explanation
Perhaps the most important point in this entire section is that explanation is contrastive (Section 2.3). Research indicates that people request only contrastive explanations, and that the cognitive burden of complete explanations is too great.
It could be argued that because models in AI operate at a level of abstraction that is considerably higher than real-world events, the causal chains are often smaller and less cognitively demanding, especially if they can be visualised. Even if one agrees with this, this argument misses a key point: it is not only the size of the causal chain that is important – people seem to be cognitively wired to process contrastive explanations, so one can argue that a layperson will find contrastive explanations more intuitive and more valuable.
This is both a challenge and an opportunity in AI. It is a challenge because often a person may just ask "Why X?", leaving their foil implicit. Eliciting a contrast case from a human observer may be difficult or even infeasible. Lipton [102] states that the obvious solution is that a non-contrastive question "Why P?" can be interpreted by default to "Why P rather than not-P?". However, he then goes on to show that to answer "Why P rather than not-P?" is equivalent to providing all causes for P – something that is not so useful. As such, the challenge is that the foil needs to be determined. In some applications, the foil could be elicited from the human observer, however, in others, this may not be possible, and therefore, foils may have to be inferred. As noted later in Section 4.6.3, concepts such as abnormality could be used to infer likely foils, but techniques for HCI, such as eye gaze [164] and gestures could be used to infer foils in some applications.
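As a purely illustrative sketch (an assumption on my part, not a technique proposed in the literature reviewed here), one crude default for a classifier is to treat the runner-up class as the implicit foil, since it is arguably the most plausible alternative the explainee had in mind:

```python
def infer_default_foil(class_scores, fact):
    """Guess a foil when none is given: the highest-scoring class other than the fact.

    class_scores: dict mapping each candidate label to the model's score for it.
    """
    alternatives = {label: score for label, score in class_scores.items() if label != fact}
    return max(alternatives, key=alternatives.get)

# Hypothetical scores from the arthropod classifier.
scores = {"beetle": 0.70, "spider": 0.25, "bee": 0.05}
print(infer_default_foil(scores, fact="beetle"))  # -> 'spider'
```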
It is an opportunity because, as Lipton [102] argues, explaining a contrastive question is often easier than giving a full causal attribution because one only needs to understand what is different between the two cases, so one can provide a complete explanation without determining or even knowing all of the causes of the fact in question. This holds for computational explanation as well as human explanation. Further, it can be beneficial in a more pragmatic way: if a person provides a foil, they are implicitly pointing towards the part of the model they do not understand. In Section 4.4, we will see research that outlines how people use contrasts to select explanations that are much simpler than their full counterparts.
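To illustrate the point, the following sketch answers a P-contrast question by citing only the observed features on which the fact class and the foil class actually differ, rather than every cause of the classification. The class definitions are hypothetical, loosely based on the arthropod example, and the code is my own illustration rather than an implementation from the cited works.

```python
# Hypothetical feature definitions for three classes of arthropod.
CLASS_FEATURES = {
    "beetle": {"legs": 6, "wings": 2, "stinger": False},
    "spider": {"legs": 8, "wings": 0, "stinger": False},
    "bee":    {"legs": 6, "wings": 4, "stinger": True},
}

def contrastive_explanation(observed, fact, foil):
    """Cite only the observed features that discriminate the fact from the foil."""
    fact_def, foil_def = CLASS_FEATURES[fact], CLASS_FEATURES[foil]
    cited = []
    for feature, fact_value in fact_def.items():
        # Features on which the two classes agree cannot explain the contrast,
        # so they are omitted even though they are causes of the classification.
        if fact_value != foil_def[feature] and observed.get(feature) == fact_value:
            cited.append((feature, fact_value, foil_def[feature]))
    return cited

observed = {"legs": 6, "wings": 2, "stinger": False}
for feature, fact_value, foil_value in contrastive_explanation(observed, "beetle", "spider"):
    print(f"it has {feature} = {fact_value}, whereas a spider would have {feature} = {foil_value}")
# it has legs = 6, whereas a spider would have legs = 8
# it has wings = 2, whereas a spider would have wings = 0
```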
Several authors within artificial intelligence flag the importance of contrastive questions. Lim and Dey [100] found via a series of user studies on context-aware applications that "Why not ...?" questions were common questions that people asked. Further, several authors have looked to answer contrastive questions. For example, Winikoff [190] considers the questions of "Why don't you believe ...?" and "Why didn't you do ...?" for BDI programs, or Fox et al. [46] who have similar questions in planning, such as "Why didn't you do something else (that I would have done)?". However, most existing work considers contrastive questions, but not contrastive explanations; that is, finding the differences between the two cases. Providing two complete explanations does not take advantage of contrastive questions. Section 4.4.1 shows that people use the difference between the fact and foil to focus explanations on the causes relevant to the question, which makes the explanations more relevant to the explainee.
2.6.3. Explanatory Tasks and Levels of Explanation
To illustrate, let's take a couple of examples and apply them to Aristotle's modes of explanation model outlined in Section 2.4. Consider our earlier arthropod classification algorithm from Section 1.4. At first glance, it may seem that such an algorithm resides at the formal level, so should offer explanations based on form. However, this would be erroneous, because the given categorisation algorithm has efficient/mechanistic components, a reason for being implemented/executed (the final mode), and is implemented on hardware (the material mode). As such, there are explanations for its behaviour at all levels. Perhaps most why-questions proposed by human observers about such an algorithm would indeed be at the formal level, such as "Why is image J in group A instead of group B?", for which an answer could refer to the particular form of the image and the groups A and B. However, in our idealised dialogue, the question "Why did you infer that the insect in image J had eight legs instead of six?" asks a question about the underlying algorithm for counting legs, so the cause is at the efficient level; that is, it does not ask for what constitutes a spider in our model, but from where the inputs for that model came. Further, the final question about classifying the spider as an octopus
refers to the final level, referring to the algorithm's function or goal. Thus, causes in this algorithm occur at all four layers: (1) the material causes are at the hardware level to derive certain calculations; (2) the formal causes determine the classification itself; (3) the efficient causes determine such concepts as how features are detected; and (4) final causes determine why the algorithm was executed, or perhaps implemented at all.
As a second example, consider an algorithm for planning a robotic search and rescue mission after a disaster. In planning, programs are dynamically constructed, so different modes of cause/explanation are of interest compared to a classification algorithm. Causes still occur at the four levels: (1) the material level as before describes the hardware computation; (2) the formal level describes the underlying model passed to the planning tool; (3) the mechanistic level describes the particular planning algorithm employed; and (4) the final level describes the particular goal or intention of a plan. In such a system, the robot would likely have several goals to achieve; e.g. searching, taking pictures, supplying first-aid packages, returning to re-fuel, etc. As such, why-questions described at the final level (e.g. its goals) may be more common than in the classification algorithm example. However, questions related to the model are relevant, or why particular actions were taken rather than others, which may depend on the particular optimisation criteria used (e.g. cost vs. time), and these require efficient/mechanistic explanations.
However, I am not arguing that we, as practitioners, must have explanatory agents capable of giving explanations at all of these levels. I argue that these frameworks are useful for analysing the types of questions explanatory agents may receive. In Sections 3 and 4, we will see work that demonstrates that for explanations at these different levels, people expect different types of explanation. Thus, it is important to understand which types of questions refer to which levels in particular instances of technology, that different levels will be more useful/likely than others, and that, in research articles on interpretability, it is clear at which level we are aiming to provide explanations.
2.6.4. Explanatory Model of Self
The work outlined in this section demonstrates that an intelligent agent must be able to reason about its own causal model. Consider our image classification example. When posed with the question "Why is image J in group A instead of group B?", it is non-trivial, in my view, to attribute the cause by using the algorithm that generated the answer. A cleaner solution would be to have a more abstract symbolic model alongside this that records information such as when certain properties are detected and when certain categorisations are made, which can be reasoned over. In other words, the agent requires a model of its own decision making – a model of self – that exists merely for the purpose of explanation. This model may be only an approximation of the original model, but more suitable for explanation.
This idea is not new in XAI. In particular, researchers have investigated machine learning models that are uninterpretable, such as neural nets, and have attempted to extract model approximations using more interpretable model types, such as Bayesian networks [63], decision trees [47], or local approximations [157]. However, my argument here is not only for the purpose of interpretability. Even models considered interpretable, such as decision trees, could be accompanied by another model that is specifically used for explanation. For example, to explain control policies, Hayes and Shah [65] select and annotate particular important state variables and actions that are relevant for explanation only. Langley et al. note that "An agent must represent content in a way that supports the explanations" [93, p. 2].
Thus, to generate meaningful and useful explanations of behaviour, models based on our understanding of explanation must sit alongside and work with the decision-making mechanisms.
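A minimal sketch of this arrangement, with invented names and no particular underlying model assumed, might wrap the decision-maker with a separate, more abstract record that exists only to be queried for explanations:

```python
class ExplainableClassifier:
    """Sketch only: the classifier stays opaque; a symbolic trace is kept for explanation."""

    def __init__(self, classify_fn, extract_features_fn):
        self._classify = classify_fn                  # opaque decision-maker
        self._extract_features = extract_features_fn  # e.g. leg/wing detectors
        self.decision_log = []                        # explanatory "model of self"

    def classify(self, item_id, raw_input):
        features = self._extract_features(raw_input)  # e.g. {"legs": 8, "wings": 0}
        label = self._classify(features)
        # Record an abstract, explanation-oriented account of the decision,
        # not the internal computation that produced it.
        self.decision_log.append({"item": item_id, "features": features, "label": label})
        return label

    def why(self, item_id):
        record = next(r for r in self.decision_log if r["item"] == item_id)
        return f'{record["item"]} was labelled "{record["label"]}" given detected features {record["features"]}'
```

The point of the sketch is only the separation of concerns: the trace is a deliberately simplified account of the decision, kept solely so that later explanatory questions can be answered without re-interpreting the underlying model.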
2.6.5. Structure of Explanation
Related to the "model of self" is the structure of explanation. Overton's model of scientific explanation [139] defines what I believe to be a solid foundation for the structure of explanation in AI. To provide an explanation along the chain outlined in Figure 4, one would need an explicit explanatory model (Section 2.6.4) of each of these different categories for the given system.
For example, the question from our dialogue in Section 1.4, "How do you know that spiders have eight legs?", is a question referring not to the causal attribution in the classification algorithm itself, but is asking: "How do you know this?", and thus is referring to how this was learnt – which, in this example, was learnt via another algorithm. Such an approach requires an additional part of the "model of self" that refers specifically to the learning, not the classification.
Overton's model [139] or one similar to it seems necessary for researchers and practitioners in explainable AI to frame their thoughts and communicate their ideas.
Just as the contents of the nonsocial environment are interrelated by certain lawful connections, causal or otherwise, which define what can or will happen, we assume that there are connections of similar character between the contents of the social environment. – Heider [66, Chapter 2, pg. 21]
In this section, we outline work on social attribution, which defines how people attribute and (partly) explain behaviour of others. Such work is clearly relevant in many areas of artificial intelligence. However, research on social attribution laid the groundwork for much of the work outlined in Section 4, which looks at how people generate and evaluate events more generally. For a more detailed survey on this, see McClure [122] and Hilton [70].
3.1. Definitions
Social attribution is about perception. While the causes of behaviour can be described at a neurophysical level, and perhaps even lower levels, social attribution is concerned not with the real causes of human behaviour, but with how others attribute or explain the behaviour of others. Heider [66] defines social attribution as person perception.
Intentions and intentionality are key to the work of Heider [66], and much of the recent work that has followed his work – for example, Dennett [35], Malle [112], McClure [122], Boonzaier et al. [10], and Kashima et al. [84]. An intention is a mental state of a person in which they form a commitment to carrying out some particular action or achieving some particular aim. Malle and Knobe [115] note that intentional behaviour is therefore always contrasted with unintentional behaviour, citing that laws of state, rules in sport, etc. all treat intentional actions differently from unintentional actions, because intentional
rule breaking is punished more harshly than unintentional rule breaking. They note that, while intentionality can be considered an objective fact, it is also a social construct, in that people ascribe intentions to each other whether that intention is objective or not, and use these to socially interact.
Folk psychology, or commonsense psychology, is the attribution of human behaviour using "everyday" terms such as beliefs, desires, intentions, emotions, and personality traits. This field of cognitive and social psychology recognises that, while such concepts may not truly cause human behaviour, these are the concepts that humans use to model and predict each others' behaviours [112]. In other words, folk psychology does not describe how we think; it describes how we think we think.
In the folk psychological model, actions consist of three parts: (1) the precondition of the action – that is, the circumstances under which it can be successfully executed, such as the capabilities of the actor or the constraints in the environment; (2) the action itself that can be undertaken; and (3) the effects of the action – that is, the changes that they bring about, either environmentally or socially.
Actions that are undertaken are typically explained by goals or intentions. In much of the work in social science, goals are equated with intentions. For our discussions, we define goals as being the end to which a means contributes, while we define intentions as short-term goals that are adopted to achieve the end goals. The intentions have no utility themselves except to achieve positive-utility goals. A proximal intention is a near-term intention that helps to achieve some further distal intention or goal. In the survey of existing literature, we will use the term used by the original authors, to ensure that they are interpreted as the authors expected.
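The following is a minimal, illustrative sketch of this folk-psychological vocabulary as data structures – preconditions, action, effects, and proximal intentions adopted in service of a distal goal. The class names and example values are assumptions for illustration only, not part of any cited model.

```python
# A minimal, hypothetical sketch of the folk-psychological action model
# described above: an action has preconditions, the action itself, and effects;
# intentions are short-term ends adopted in service of a distal goal.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    name: str
    preconditions: List[str]   # circumstances required for successful execution
    effects: List[str]         # changes the action brings about

@dataclass
class Intention:
    end: str                   # what this intention is adopted to bring about
    proximal: bool = True      # a near-term step, as opposed to a distal goal

@dataclass
class Goal:
    end: str                   # the end to which the means contribute
    intentions: List[Intention] = field(default_factory=list)

# Hypothetical example: a distal goal is served by proximal intentions,
# and each intention is realised by actions with preconditions and effects.
goal = Goal("see Juliet",
            intentions=[Intention("get over the wall"),
                        Intention("reach the balcony")])
climb = Action("climb the wall",
               preconditions=["wall is climbable", "actor can climb"],
               effects=["actor is over the wall"])
```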
# 3.2. Intentionality and Explanation
Heider [66] was the first person to experimentally try to identify how people attribute behaviour to others. In their now famous experiment from 1944, Heider and Simmel [67] showed a video containing animated shapes – a small triangle, a large triangle, and a small circle – moving around a screen3, and asked experiment participants to observe the video and then describe the behaviour of the shapes. Figure 5 shows a captured screenshot from this video in which the circle is opening a door to enter into a room. The participants' responses described the behaviour anthropomorphically, assigning actions, intentions, emotions, and personality traits to the shapes. However, this experiment was not one on animation, but one in social psychology. The aim of the experiment was to demonstrate that people characterise deliberative behaviour using folk psychology.
Heider [66] argued then that the difference between object perception – describing the causal behaviour of objects – and person perception was the intentions, or motives, of the person. He noted that behaviour in a social situation can have two types of causes: (1) personal (or dispositional) causality; and (2) impersonal causality, which can subsequently be influenced by situational factors, such as the environment. This interpretation led to many researchers reflecting on the person-situation distinction and, in Malle's view [114], incorrectly interpreting Heider's work for decades.
Heider [66] contends that the key distinction between intentional action and non-intentional events is that intentional action demonstrates equifinality: while the means to realise an intention may vary, the intention itself remains equifinal.
3 See the video here: https://www.youtube.com/watch?v=VTNmLt7QX8E.
Figure 5: A screenshot of the video used in Heider and Simmel's seminal study [67].
Thus, if an actor should fail to achieve their intention, they will try other ways to achieve this intention, which differs from physical causality. Lombrozo [107] provides the example of Romeo and Juliet, noting that had a wall been placed between them, Romeo would have scaled the wall or knocked it down to reach his goal of seeing Juliet. However, iron filings trying to get to a magnet would not display such equifinality – they would instead simply be blocked by the wall. Subsequent research confirms this distinction [35, 112, 122, 10, 84, 108].
Malle and Pearce [118] break the actions that people will explain into two dimensions: (1) intentional vs. unintentional; and (2) observable vs. unobservable; thus creating four different classifications (see Figure 6).
                 Intentional             Unintentional
Observable       actions                 mere behaviours
Unobservable     intentional thoughts    experiences
Figure 6: Malle's classification of types of events, based on the dimensions of intentionality and observability [112, Chapter 3]
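As a hedged illustration, the two-dimensional classification in Figure 6 could be encoded as a simple lookup; the tuple keys and labels below merely transcribe the four cells of the figure.

```python
# A small, illustrative lookup for Malle and Pearce's two-dimensional
# classification of events (Figure 6): intentionality x observability -> event type.

EVENT_TYPES = {
    (True,  True):  "action",                # intentional and observable
    (True,  False): "intentional thought",   # intentional but unobservable
    (False, True):  "mere behaviour",        # unintentional and observable
    (False, False): "experience",            # unintentional and unobservable
}

def classify_event(intentional: bool, observable: bool) -> str:
    return EVENT_TYPES[(intentional, observable)]

assert classify_event(intentional=True, observable=False) == "intentional thought"
```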
Malle and Pearce [118] performed experiments to confirm this model. As part of these experiments, participants were placed into a room with another participant, and were left for 10 minutes to converse with each other to "get to know one another", while their conversation was recorded. Malle and Pearce coded participants' responses to questions with regard to observability and intentionality. Their results show that actors tend to explain unobservable events more than observable events, which Malle and Pearce argue is because the actors are more aware of their own beliefs, desires, feelings, etc., than of their observable behaviours (such as facial expressions, gestures, and postures). On the other hand, observers do the opposite for the inverse reason. Further, they showed that actors tend to explain unintentional behaviour more than intentional behaviour, again because (they believe) they are aware of their intentions, but not their "unplanned" unintentional behaviour. Observers tend to find both intentional and unintentional behaviour difficult
to explain, but will tend to find intentional behaviour more relevant. Such a model accounts for the correspondence bias noted by Gilbert and Malone [51], which is the tendency for people to explain others' behaviours based on traits rather than situational factors, because the situational factors (beliefs, desires) are invisible.
3.3. Beliefs, Desires, Intentions, and Traits
Further to intentions, research suggests that other factors are important in attribution of social behaviour; in particular, beliefs, desires, and traits.
Kashima et al. [84] demonstrated that people use the folk psychological notions of belief, desire, and intention to understand, predict, and explain human action. In particular, they demonstrated that desires hold preference over beliefs, with beliefs not being explained if they are clear from the viewpoint of the explainee. They showed that people judge that explanations and behaviour "do not make sense" when beliefs, desires, and intentions were inconsistent with each other. This early piece of work is one of the first to re-establish Heider's theory of intentional behaviour in attribution [66].
However, it is the extensive body of work from Malle [111, 112, 113] that is the most seminal in this space.
3.3.1. Malle's Conceptual Model for Social Attribution
Malle [112] proposes a model based on Theory of Mind, arguing that people attribute behaviour of others and themselves by assigning particular mental states that explain the behaviour. He offers six postulates (and sub-postulates) for the foundation of people's folk explanation of behaviour, modelled in the scheme in Figure 7. He argues that these six postulates represent the assumptions and distinctions that people make when attributing behaviour to themselves and others:
[Figure 7 depicts a decision scheme: determine the intentionality of the behaviour; if unintentional, offer a cause; if intentional, offer a reason (a belief or desire, marked or unmarked), a causal history of reasons (CHR), or an enabling factor (EF).]
Figure 7: Malle's conceptual framework for behaviour explanation; reproduced from Malle [113, p. 87, Figure 3.3], adapted from Malle [112, p. 119, Figure 5.1]
1. People distinguish between intentional and unintentional behaviour.
2. For intentional behaviour, people use three modes of explanation based on the specific circumstances of the action:
(a) Reason explanations are those explanations that link to the mental states (typically desires and beliefs, but also values) for the act, and the grounds on which they formed an intention.
(b) Causal History of Reason (CHR) explanations are those explanations that use factors that "lay in the background" of an agent's reasons (note, not the background of the action), but are not themselves reasons. Such factors can include unconscious motives, emotions, culture, personality, and the context. CHR explanations refer to causal factors that lead to reasons. CHR explanations do not presuppose either subjectivity or rationality. This has three implications. First, they do not require the explainer to take the perspective of the explainee. Second, they can portray the actor as less rational, by not offering a rational and intentional reason for the behaviour. Third, they allow the use of unconscious motives that the actor themselves would typically not use. Thus, CHR explanations can make the agent look less rational and less in control than reason explanations do.
(c) Enabling factor (EF) explanations are those explanations that explain not the intention of the actor, but instead explain how the intentional action achieved the outcome that it did. Thus, it assumes that the agent had an intention, and then refers to the factors that enabled the agent to successfully carry out the action, such as personal abilities or environmental properties. In essence, it relates to why preconditions of actions were enabled.
3. For unintentional behaviour, people offer just causes, such as physical, mechanistic, or habitual causes.
At the core of Malle's framework is the intentionality of an act. For a behaviour to be considered intentional, the behaviour must be based on some desire, and a belief that the behaviour can be undertaken (both from a personal and situational perspective) and can achieve the desire. This forms the intention. If the agent has the ability and the awareness that they are performing the action, then the action is intentional.
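As an illustration only, the following Python sketch renders this part of Malle's framework as a simple selection procedure: intentionality is judged from desire, belief, intention, ability, and awareness, and a mode of explanation (cause, reason, CHR, or EF) is then chosen. The trigger conditions used for the CHR and EF branches are simplifying assumptions of mine, not part of Malle's model.

```python
# An illustrative (not prescriptive) rendering of Malle's framework as a
# selection procedure: first judge intentionality, then pick an explanation mode.

from dataclasses import dataclass

@dataclass
class BehaviourJudgement:
    desire: bool      # the behaviour is based on some desire
    belief: bool      # belief that the behaviour can be done and achieves the desire
    intention: bool   # an intention was formed from the desire and belief
    ability: bool     # the agent has the ability to perform the action
    awareness: bool   # the agent is aware of performing the action

def is_intentional(j: BehaviourJudgement) -> bool:
    return j.desire and j.belief and j.intention and j.ability and j.awareness

def explanation_mode(j: BehaviourJudgement, question: str) -> str:
    if not is_intentional(j):
        return "cause"                       # physical, mechanistic or habitual cause
    if question == "how was the outcome achieved?":
        return "enabling factor"             # EF: why the preconditions held
    if question == "what lies behind the agent's reasons?":
        return "causal history of reasons"   # CHR: traits, culture, context
    return "reason"                          # default: the agent's beliefs/desires/values
```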
Linguistically, people make a distinction between causes and reasons; for example, consider "What were her reasons for choosing that book?", vs. "What were his causes for falling over?". The use of "his causes" implies that the cause does not belong to the actor, but the reason does.
To give a reason explanation is to attribute intentionality to the action, and to identify the desires, beliefs, and valuings in light of which (subjectivity assumption) and on the grounds of which (rationality assumption) the agent acted. Thus, reasons imply intentionality, subjectivity, and rationality.
3.4. Individual vs. Group Behaviour
Susskind et al. [167] investigated how people ascribe causes to groups rather than individuals, focusing on traits.
They provided experimental participants with a set of statements describing behaviours performed by individuals or groups; the participants were then asked to provide ratings of different descriptions of these individuals/groups, such as their intelligence (a trait, or CHR in Malle's framework), and were asked to judge the confidence of their judgements. Their results showed that, as with individuals, participants freely assigned traits to groups, showing that groups are seen as agents themselves. However, they showed that when explaining an individual's behaviour, the participants were able to produce explanations faster and more confidently than for groups, and that the traits that they assigned to individuals were judged to be less 'extreme' than those assigned to groups. In a second set of experiments, Susskind et al. showed that people expect more consistency in an individual's behaviour compared to that of a group. When presented with a behaviour that violated the impression that participants had formed of individuals or groups, the participants were more likely to attribute the individual's behaviour to causal mechanisms than the groups' behaviour.
O'Laughlin and Malle [137] further investigated people's perception of group vs. individual behaviour, focusing on the intentionality of explanation. They investigated the relative agency of groups that consist of 'unrelated' individuals acting independently (aggregate groups) compared to groups acting together (jointly acting groups). In their study, participants were more likely to offer CHR explanations than intention explanations for aggregate groups, and more likely to offer intention explanations than CHR explanations for jointly acting groups. For instance, to explain why all people in a department store came to that particular store, participants were more likely to offer a CHR explanation, such as that there was a sale on at the store that day. However, to answer the same question for why a group of friends came to the same store, participants were more likely to offer an explanation that the group wanted to spend the day together shopping – a desire. This may demonstrate that people cannot attribute intentional behaviour to the individuals in an aggregate group, so resort to more causal history explanations.
O'Laughlin and Malle's [137] finding about using CHRs to explain aggregate group behaviour is consistent with the earlier work from Kass and Leake [85], whose model of explanation explicitly divided intentional explanations from social explanations, which are explanations about human behaviour that is not intentionally driven (discussed in more detail in Section 2.4). These social explanations account for how people attribute deliberative behaviour to groups without referring to any form of intention.
An intriguing result from O'Laughlin and Malle [137] is that while people attribute less intentionality to aggregate groups than to individuals, they attribute more intentionality to jointly acting groups than to individuals. O'Laughlin and Malle reason that joint action is highly deliberative, so the group intention is more likely to have been explicitly agreed upon prior to acting, and the individuals within the group would be explicitly aware of this intention compared to their own individual intentions.
# 3.5. Norms and Morals
Norms have been shown to hold a particular place in social attribution. Burguet and Hilton [15] (via Hilton [70]) showed that norms and abnormal behaviour are important in how people ascribe mental states to one another. For example, Hilton [70] notes that upon hearing the statement "Ted admires Paul", people tend to attribute some trait to Paul as the object of the sentence, such as that Paul is charming and many people would admire him; and even that Ted does not admire many people. However, a counter-normative statement such as "Ted admires the rapist" triggers attributions instead to
Ted, explained by the fact that it is non-normative to admire rapists, so Ted's behaviour is distinctive to others, and is more likely to require an explanation. In Section 4, we will see more on the relationship between norms, abnormal behaviour, and attribution. Uttich and Lombrozo [174] investigate the relationship of norms and the effect they have on attributing particular mental states, especially with regard to morals. They offer an interesting explanation of the side-effect effect, or the Knobe effect [88], which is the tendency for people to attribute particular mental states (Theory of Mind) based on moral judgement. Knobe's vignette from his seminal paper [88] is:
The vice-president of a company went to the chairman of the board and said, "We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment". The chairman of the board answered, "I don't care at all about harming the environment. I just want to make as much profit as I can. Let's start the new program." They started the new program. Sure enough, the environment was harmed.
Knobe then produced a second vignette, which is exactly the same, except that the side-effect of the program was in fact that the environment was helped. When participants were asked if the chairman had intentionally harmed the environment (first vignette), 82% of respondents replied yes. However, in the second vignette, only 23% thought that the chairman intentionally helped the environment.
Uttich and Lombrozo [174] hypothesise that the two existing camps aiming to explain this effect – the Intuitive Moralist and the Biased Scientist – do not account for this. Uttich and Lombrozo hypothesise that it is the fact that norms are violated that accounts for this; that is, rather than moral judgements influencing intentionality attribution, it is the more general relationship of conforming (or not) to norms (moral or not). In particular, behaviour that conforms to norms is less likely to change a person's Theory of Mind (intention) of another person compared to behaviour that violates norms.
Samland and Waldmann [161] further investigate social attribution in the context of norms, looking at permissibility rather than obligation. They gave participants scenarios in which two actors combined to cause an outcome. For example, a department in which only administrative assistants are permitted to take pens from the stationery cupboard. One morning, Professor Smith (not permitted) and an assistant (permitted) each take a pen, and there are no pens remaining. Participants were tasked with rating how strongly each agent caused the outcome. Their results showed that participants rated the action of the non-permitted actor (e.g. Professor Smith) more than three times stronger than the other actor. However, if the outcome was positive instead of negative, such as an intern (not permitted) and a doctor (permitted) both signing off on a request for a drug for a patient, who subsequently recovers due to the double dose, participants rate the non-permitted behaviour only slightly stronger. As noted by Hilton [70, p. 54], these results indicate that in such settings, people seem to interpret the term cause as meaning "morally or institutionally responsible".
In a follow-up study, Samland et al. [160] showed that children are not sensitive to norm-violating behaviour in the same way that adults are. In particular, while both adults and children correlate cause and blame, children do not distinguish between cases in which the person was aware of the norm, while adults do.
3.6. Social Attribution and XAI
This section presents some ideas on how the work on social attribution outlined above affects researchers and practitioners in XAI.
# 3.6.1. Folk Psychology
While the models and research results presented in this section pertain to the behaviour of humans, it is reasonably clear that these models have a place in explainable AI. Heider and Simmel's seminal experiments from 1944 with moving shapes [67] (Section 3.2) demonstrate unequivocally that people attribute folk psychological concepts such as belief, desire, and intention, to artificial objects. Thus, as argued by de Graaf and Malle [34], it is not a stretch to assert that people will expect explanations using the same conceptual framework used to explain human behaviours.
This model is particularly promising because many knowledge-based models in deliberative AI either explicitly build on such folk psychological concepts, such as belief-desire-intention (BDI) models [152], or can be mapped quite easily to them; e.g. in classical-like AI planning, goals represent desires, intermediate/landmark states represent intentions, and the environment model represents beliefs [50].
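A minimal sketch of this mapping, assuming a planner that exposes its goals, landmark states, and environment model; the dictionary keys and example values below are hypothetical, not a real planner API.

```python
# A hedged sketch of the mapping suggested above for a classical-like planner:
# goals ~ desires, intermediate/landmark states ~ intentions, and the
# environment model ~ beliefs. Names are illustrative only.

def folk_psychological_view(planner_state):
    """Re-describe a planning agent's internals in folk-psychological terms."""
    return {
        "desires":    planner_state["goals"],             # the ends being pursued
        "intentions": planner_state["landmarks"],         # committed intermediate states
        "beliefs":    planner_state["environment_model"], # what the agent takes to be true
    }

view = folk_psychological_view({
    "goals": ["package delivered"],
    "landmarks": ["package loaded", "truck at depot"],
    "environment_model": {"truck_location": "warehouse", "road_open": True},
})
print(view["desires"], view["intentions"])
```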
In addition, the concepts of and relationships between actions, preconditions, and proximal and distal intentions are similar to those in models such as BDI and planning, and as such, the work on the relationships between preconditions, outcomes, and competing goals is useful in this area.
# 3.6.2. Malle's Models
Of all of the work outlined in this section, it is clear that Malle's model, culminating in his 2004 textbook [112], is the most mature and complete model of social attribution to date. His three-layer model provides a solid foundation on which to build explanations of many deliberative systems, in particular, goal-based deliberation systems.
Malle's conceptual framework provides a suitable structure for characterising different aspects of causes for behaviour. It is clear that reason explanations will be useful for goal-based reasoners, as discussed in the case of BDI models and goal-directed AI planning, and enabling factor explanations can play a role in how questions and in counterfactual explanations. In Section 4, we will see further work on how to select explanations based on these concepts.
However, causal history of reasons (CHR) explanations also have a part to play for deliberative agents. In human behaviour, they refer to personality traits and other unconscious motives. While anthropomorphic agents could clearly use CHRs to explain behaviour, such as emotion or personality, they are also valid explanations for non-anthropomorphic agents. For example, for AI planning agents that optimise some metric, such as cost, the explanation that action a was chosen over action b because it had lower cost is a CHR explanation. The fact that the agent is optimising cost is a 'personality trait' of the agent that is invariant given the particular plan or goal. Other types of planning systems may instead be risk averse, optimising to minimise risk or regret, or may be 'flexible' and try to help out their human collaborators as much as possible. These types of explanations are CHRs, even if they are not described as personality traits to the explainee. However, one must be careful to ensure these CHRs do not make the agent appear irrational, unless, of course, that is the goal one is trying to achieve with the explanation process.
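To make the planning example concrete, the following minimal sketch (my own illustration with hypothetical names, not drawn from any of the surveyed systems) shows how a cost-optimising planner could pair a specific reason explanation with a CHR-style explanation that cites its invariant optimisation criterion.

```python
# Hypothetical sketch: a cost-minimising planner that reports both a reason
# explanation (the specific comparison) and a CHR-style explanation (its
# invariant optimisation "trait").
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float

def choose_action(candidates):
    # The planner's invariant decision rule: always minimise cost.
    return min(candidates, key=lambda a: a.cost)

def explain_choice(chosen, rejected):
    return {
        # Reason explanation: the comparison behind this particular decision.
        "reason": (f"'{chosen.name}' was chosen over '{rejected.name}' because "
                   f"its cost ({chosen.cost}) is lower than {rejected.cost}."),
        # CHR explanation: a stable 'trait' of the agent, invariant across plans.
        "causal_history": "This planner always selects the minimum-cost action.",
    }

if __name__ == "__main__":
    a, b = Action("take-highway", 3.0), Action("take-backroads", 5.5)
    best = choose_action([a, b])
    other = a if best is b else b
    print(explain_choice(best, other))
```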
Broekens et al. [12] describe algorithms for automatic generation of explanations for BDI agents. Although their work does not build on Malle's model directly, it shares a similar structure, as noted by the authors, in that their model uses intentions and enabling conditions as explanations. They present three algorithms for explaining behaviour: (a) offering the goal towards which the action contributes; (b) offering the enabling condition of an action; and (c) offering the next action that is to be performed; thus, the explanandum is explained by offering a proximal intention. A set of human behavioural experiments showed that the different explanations are considered better in different circumstances; for example, if only one action is required to achieve the goal, then offering the goal as the explanation is more suitable than offering the other two types of explanation, while if it is part of a longer sequence, also offering a proximal intention is evaluated as being a more valuable explanation. These results reflect those by Malle, but also other results from social and cognitive psychology on the link between goals, proximal intentions, and actions, which are surveyed in Section 4.4.3.
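A rough sketch of these three strategies is given below; the data structures and the selection heuristic are assumptions for illustration, not Broekens et al.'s implementation.

```python
# Illustrative sketch of the three explanation strategies described above:
# (a) explain by goal, (b) explain by enabling condition, (c) explain by the
# next (proximal) action. Structures and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    action: str
    enabling_condition: str

@dataclass
class Goal:
    name: str
    remaining_steps: list = field(default_factory=list)  # steps after the current one

def explain_by_goal(step, goal):
    return f"I am doing '{step.action}' because I want to achieve '{goal.name}'."

def explain_by_condition(step):
    return f"I am doing '{step.action}' because '{step.enabling_condition}' holds."

def explain_by_next_action(step, goal):
    nxt = goal.remaining_steps[0]
    return f"I am doing '{step.action}' so that I can then do '{nxt.action}'."

def select_explanation(step, goal):
    # Heuristic mirroring the behavioural findings above: if this is the only
    # action left, the goal alone suffices; for a longer sequence, also offer
    # the proximal intention.
    if not goal.remaining_steps:
        return explain_by_goal(step, goal)
    return explain_by_goal(step, goal) + " " + explain_by_next_action(step, goal)

if __name__ == "__main__":
    goal = Goal("serve coffee", remaining_steps=[PlanStep("pour coffee", "cup is under spout")])
    print(select_explanation(PlanStep("heat water", "kettle is filled"), goal))
    print(explain_by_condition(PlanStep("heat water", "kettle is filled")))
```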
# 3.6.3. Collective Intelligence
The research into behaviour attribution of groups (Section 3.4) is important for those working in collective intelligence; areas such as multi-agent planning [11], computational social choice [26], or argumentation [8]. Although this line of work appears to be much less explored than attribution of individuals' behaviour, the findings from Kass and Leake [85], Susskind et al., and in particular O'Laughlin and Malle [137] that people assign intentions and beliefs to jointly-acting groups, and reasons to aggregate groups, indicate that the large body of work on attribution of individual behaviour could serve as a solid foundation for explanation of collective behaviour.
# 3.6.4. Norms and Morals
The work on norms and morals discussed in Section 3.5 demonstrates that normative behaviour, in particular, violation of such behaviour, has a large impact on the ascription of a Theory of Mind to actors. Clearly, for anthropomorphic agents, this work is important, but as with CHRs, I argue here that it is important for more 'traditional' AI as well.
First, the link with morals is important for applications that elicit ethical or social concerns, such as defence, safety-critical applications, or judgements about people. Explanations or behaviour in general that violate norms may give the impression of 'immoral machines' (whatever that can mean), and thus, such norms need to be explicitly considered as part of explanation and interpretability.
Second, as discussed in Section 2.2, people mostly ask for explanations of events that they find unusual or abnormal [77, 73, 69], and violation of normative behaviour is one such abnormality [73]. Thus, normative behaviour is important in interpretability; this will not surprise researchers and practitioners of normative artificial intelligence.
In Section 4, we will see that norms and violations of normal or normative behaviour are also important in the cognitive processes of asking for, constructing, and evaluating explanations, and in their impact on interpretability.
# 4. Cognitive Processes – How Do People Select and Evaluate Explanations?
There are as many causes of x as there are explanations of x. Consider how the cause of death might have been set out by the physician as 'multiple haemorrhage', by the barrister as 'negligence on the part of the driver', by the carriage-builder as 'a defect in the brakelock construction', by a civic planner as 'the presence of tall shrubbery at that turning'. None is more true than any of the others, but the particular context of the question makes some explanations more relevant than others. – Hanson [61, p. 54]
Mill [130] provides one of the earliest investigations of cause and explanation, and he argued that we make use of 'statistical' correlations to identify causes, which he called the Method of Difference. He argued that causal connection and explanation selection are essentially arbitrary, and that scientifically and philosophically it is 'wrong' to select one explanation over another, but he offered several cognitive biases that people seem to use, including unexpected conditions, precipitating causes, and variability. Such covariation ideas were dominant in causal attribution, in particular in the work of Kelley [86]. However, many researchers noted that the covariation models failed to explain many observations; for example, people can identify causes between events from a single data point [127, 75]. Therefore, more recently, new theories have displaced them, while still acknowledging that the general idea that people use co-variations is valid.
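As a toy illustration of the Method of Difference (my own sketch, not Mill's formulation): a factor is flagged as a candidate cause if it is present in a case where the effect occurs and absent from an otherwise similar case where it does not.

```python
# Toy sketch of the Method of Difference: compare a case where the effect
# occurred with an otherwise similar case where it did not, and flag the
# differing factors as candidate causes. Example factors are invented.
def method_of_difference(case_with_effect, case_without_effect):
    return set(case_with_effect) - set(case_without_effect)

lit = {"oxygen present", "match struck", "match dry"}
not_lit = {"oxygen present", "match struck"}
print(method_of_difference(lit, not_lit))  # {'match dry'}
```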
In this section, we look at these theories. In particular, we survey three types of cognitive process used in explanation: (1) causal connection, which is the process people use to identify the causes of events; (2) explanation selection, which is the process people use to select a small subset of the identified causes as the explanation; and (3) explanation evaluation, which is the process that an explainee uses to evaluate the quality of an explanation. Most of this research shows that people have certain cognitive biases that they apply to explanation generation, selection, and evaluation.
# 4.1. Causal Connection, Explanation Selection, and Evaluation
Malle [112] presents a theory of explanation, which breaks the psychological processes used to offer explanations into two distinct groups, outlined in Figure 8:
1. Information processes – processes for devising and assembling explanations. The present section will present related work on this topic.
2. Impression management processes – processes for governing the social interaction of explanation. Section 5 will present related work on this topic.
Malle [112] further splits each of these along two further dimensions, which refer to the tools for constructing and giving explanations, and the explainer's perspective or knowledge about the explanation.
Taking the two dimensions, there are four items:
1. Information requirements – what is required to give an adequate explanation; for example, one must know the causes of the explanandum, such as the desires and beliefs of an actor, or the mechanistic laws for a physical cause.
Figure 8: Malle's process model for behaviour explanation; reproduced from Malle [114, p. 320, Figure 6.6]
2. Information access – what information the explainer has available to give the explanation, such as the causes, the desires, etc. Such information can be lacking; for example, the explainer does not know the intentions or beliefs of an actor in order to explain their behaviour.
3. Pragmatic goals – refers to the goal of the explanation, such as transferring knowledge to the explainee, making an actor look irrational, or generating trust with the explainee.
4. Functional capacities – each explanatory tool has functional capacities that constrain or dictate what goals can be achieved with that tool (see the sketch after this list).
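The sketch below is my own illustrative mapping of these four elements onto a simple record that an explanatory agent could populate before constructing an explanation; the field names and example values are assumptions, not part of Malle's framework.

```python
# Hypothetical record of the four elements above, plus a helper that exposes
# the gap between what an adequate explanation requires and what the
# explainer actually has access to.
from dataclasses import dataclass, field

@dataclass
class ExplanationContext:
    information_requirements: list = field(default_factory=list)  # causes an adequate explanation needs
    information_access: dict = field(default_factory=dict)        # the causes the explainer actually knows
    pragmatic_goal: str = "transfer knowledge"                     # e.g. generate trust, excuse, teach
    functional_capacities: list = field(default_factory=list)     # what the chosen explanatory tool can do

    def missing_information(self):
        # Requirements that the explainer cannot currently satisfy.
        return [r for r in self.information_requirements if r not in self.information_access]

ctx = ExplanationContext(
    information_requirements=["actor's goal", "actor's belief"],
    information_access={"actor's goal": "deliver the package"},
)
print(ctx.missing_information())  # ["actor's belief"]
```

The gap returned by `missing_information` is one way to think about the information asymmetry discussed next.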
Malle et al. [117] argue that this theory accounts for apparent paradoxes observed in attribution theory, most specifically the actor-observer asymmetries, in which actors and observers offer different explanations for the same action taken by an actor. They hypothesise that this is due to information asymmetry; e.g. an observer cannot access the intentions of an actor, so the intentions must be inferred from the actor's behaviour. In this section, we first look specifically at processes related to the explainer: information access and pragmatic goals. When requested for an explanation, people typically do not have direct access to the causes, but infer them from observations and prior knowledge. Then, they select some of those causes as the explanation, based on the goal of the explanation. These two processes are known as causal connection (or causal inference), which is the process of identifying the key causal connections to the fact; and explanation selection (or causal selection), which is the process of selecting a subset of those causes to provide as an explanation.
This paper separates causal connection into two parts: (1) abductive reasoning, the cognitive process in which people try to infer causes that explain events by making assumptions about hypotheses and testing these; and (2) simulation, which is the cognitive process of simulating through counterfactuals to derive a good explanation.
These processes overlap, but can be somewhat different. For example, the former requires the reasoner to make assumptions and test the validity of observations with respect to these assumptions, while in the latter, the reasoner could have complete knowledge of the causal rules and environment, but use simulation of counterfactual cases to derive an explanation. From the perspective of explainable AI, an explanatory agent explaining its decision would not require abductive reasoning as it is certain of the causes of its decisions. An explanatory agent trying to explain some observed events not under its control, such as the behaviour of another agent, may require abductive reasoning to find a plausible set of causes.
Finally, when explainees receive explanations, they go through the process of explanation evaluation, through which they determine whether the explanation is satisfactory or not. A primary criterion is that the explanation allows the explainee to understand the cause; however, people's cognitive biases mean that they prefer certain types of explanation over others.
# 4.2. Causal Connection: Abductive Reasoning
The relationship between explanation and abductive reasoning is introduced in Section 2.1.4. This section surveys work in cognitive science that looks at the process of abduction. Of particular interest to XAI (and artificial intelligence in general) is work demonstrating the link between explanation and learning, but also other processes that people use to simplify the abductive reasoning process for explanation generation, and to switch modes of reasoning to correspond with types of explanation.
# 4.2.1. Abductive Reasoning and Causal Types
Rehder [154] looked specifically at categorical or formal explanations. He presents the causal model theory, which states that people infer categories of objects by both their features and the causal relationships between features. His experiments show that people categorise objects based on their perception that the observed properties were generated by the underlying causal mechanisms. Rehder gives the example that people not only know that birds can fly and birds have wings, but that birds can fly because they have wings. In addition, Rehder shows that people use combinations of features as evidence when assigning objects to categories, especially for features that seem incompatible based on the underlying causal mechanisms. For example, when categorising an animal that cannot fly, yet builds a nest in trees, most people would consider it implausible to categorise it as a bird because it is difficult to build a nest in a tree if one cannot fly. However, people are more likely to categorise an animal that does not fly and builds nests on the ground as a bird (e.g. an ostrich or emu), as this is more plausible, even though the first example has more features in common with a bird (building nests in trees).
Rehder [155] extended this work to study how people generalise properties based on the explanations received. When his participants were asked to infer their own explanations using abduction, they were more likely to generalise a property from a source object to a target object if the two had more features in common; e.g. generalise a property from one species of bird to another, but not from a species of bird to a species of plant. However, given an explanation based on features, this relationship is almost completely eliminated: the generalisation was only done if the features detailed in the explanation were shared between the source and target objects; e.g. bird species A and mammal B both eat the same food, which is explained as the cause of an illness, for example.
Thus, the abductive reasoning process used to infer explanations was also used to generalise properties, a parallel seen in machine learning [133].
However, Williams et al. [189] demonstrate that, at least for categorisation in abductive reasoning, the properties of generalisation that support learning can in fact weaken learning by overgeneralising. They gave experimental participants a categorisation task to perform by training themselves on exemplars. They asked one group to explain the categorisations as part of the training, and another to just 'think aloud' about their task. The results showed that the explanation group more accurately categorised features that had similar patterns to the training examples, but less accurately categorised exceptional cases and those with unique features. Williams et al. argue that explaining (which forces people to think more systematically about the abduction process) is good for fostering generalisations, but this comes at the cost of over-generalisation.
Chin-Parker and Cantelon [28] provide support for the contrastive account of explanation (see Section 2.3) in categorisation/classification tasks. They hypothesise that contrast classes (foils) are key to providing the context of an explanation. They distinguish between prototypical features of categorisation, which are those features that are typical of a particular category, and diagnostic features, which are those features that are relevant for a contrastive explanation. Participants in their study were asked to either describe particular robots or explain why robots were of a particular category, and then complete follow-up transfer learning tasks. The results demonstrated that participants in the description group mentioned significantly more features in general, while participants in the explanation group selectively targeted contrastive features. These results provide empirical support for contrastive explanation in category learning.
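One way to operationalise this distinction for explainable AI (a sketch of the general idea only, not the authors' experimental procedure; the robot feature sets are invented) is to separate the features shared with the contrast class from those that discriminate the fact's category.

```python
# Sketch: shared features are prototypical of both categories and carry little
# contrastive value; diagnostic features answer "why category A rather than B?".
def split_features(fact_features, foil_features):
    fact, foil = set(fact_features), set(foil_features)
    return {"diagnostic": fact - foil, "shared": fact & foil}

robot_type_a = {"has wheels", "has gripper", "paints parts"}
robot_type_b = {"has wheels", "has gripper", "welds parts"}
print(split_features(robot_type_a, robot_type_b))
# diagnostic: {'paints parts'}; shared: {'has wheels', 'has gripper'}
```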
# 4.2.2. Background and Discounting
Hilton [73] discusses the complementary processes of backgrounding and discounting that affect the abductive reasoning process. Discounting is when a hypothesis is deemed less likely as a cause because additional contextual information has been added to a competing hypothesis as part of causal connection: it is discounted as a cause of the event. Backgrounding involves pushing a possible cause to the background because it is not relevant to the goal, or because new contextual information has been presented that makes it no longer a good explanation (but still a cause). That is, while it is a cause of the event, it is not relevant to the explanation because, for example, the contrastive foil also has this cause. As noted by Hilton [73], discounting occurs in the context of multiple possible causes (there are several possible causes and the person is trying to determine which caused the fact), while backgrounding occurs in the context of multiple necessary events (a subset of necessary causes is selected as the explanation). Thus, discounting is part of causal connection, while backgrounding is part of explanation selection.
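The two processes can be caricatured in a few lines; the causes and scores below are hypothetical and purely illustrative, not a model taken from Hilton's work.

```python
# Backgrounding: a cause shared with the foil remains a cause, but is dropped
# from the explanation because it does not discriminate fact from foil.
def background(causes_of_fact, causes_of_foil):
    return {c: w for c, w in causes_of_fact.items() if c not in causes_of_foil}

# Discounting: when added context supports one competing hypothesis, the
# credibility of the alternatives is reduced.
def discount(hypotheses, supported, factor=0.5):
    return {h: (w if h == supported else w * factor) for h, w in hypotheses.items()}

causes = {"deadline was moved": 0.9, "heavy traffic": 0.6}
print(background(causes, causes_of_foil={"heavy traffic"}))      # keeps only the discriminating cause
print(discount({"driver error": 0.5, "brake defect": 0.5}, supported="brake defect"))
```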
# 4.2.3. Explanatory Modes
As outlined in Section 2.4, philosophers and psychologists accept that different types of explanations exist; for example, Aristotle's model: material, formal, efficient, and final. However, theories of causality have typically argued for only one type of cause, with the two most prominent being dependence theories and transference theories.
Lombrozo [107] argues that both dependence theories and transference theories are at least psychologically real, even if only one (or neither) is the true theory. She hypothesises that people employ different modes of abductive reasoning for different modes of cognition, and thus both forms of explanation are valid: functional (final) explanations are better for phenomena that people consider to have dependence relations, while mechanistic (efficient) explanations are better for physical phenomena.
Lombrozo [107] gave experimental participants scenarios in which the explanatory mode was manipulated and isolated using a mix of intentional and accidental/incidental human action, and in a second set of experiments, using biological traits that provide a particular function or simply cause certain events incidentally. Participants were asked to evaluate different causal claims. The results of these experiments show that when events were interpreted in a functional manner, counterfactual dependence was important, but physical connections were not. However, when events were interpreted in a mechanistic manner, counterfactual dependence and physical dependence were both deemed important. This implies that there is a link between functional causation and dependence theories on the one hand, and between mechanistic explanation and transference theories on the other. The participants also rated the functional explanation as stronger in the case that the causal dependence was intentional, as opposed to accidental. Lombrozo [106] studied the same issue of functional vs. mechanistic explanations for inference in categorisation tasks specifically. She presented participants with tasks similar to the following (text in square brackets added):
There is a kind of flower called a holing. Holings typically have brom compounds in their stems and they typically bend over as they grow. Scientists have discovered that having brom compounds in their stems is what usually causes holings to bend over as they grow [mechanistic cause]. By bending over, the holing's pollen can brush against the fur of field mice, and spread to neighboring areas [functional cause].
Explanation prompt: Why do holings typically bend over?
They then gave participants a list of questions about flowers; for example: Suppose a flower has brom compounds in its stem. How likely do you think it is that it bends over? Their results showed that participants who provided a mechanistic explanation for the first prompt were more likely to think that the flower would bend over, and vice versa for functional causes. These findings show that giving explanations influences the inference process, changing the importance of different features in the understanding of category membership, and that the importance of features in explanations can impact the categorisation of that feature. In extended work, Lombrozo and Gwynne [109] argue that people generalise better from functional than from mechanistic explanations.
# 4.2.4. Inherent and Extrinsic Features
Prasada and Dillingham [149] and Prasada [148] discuss how people's abductive reasoning process prioritises certain factors in the formal mode. Prasada contends that "Identifying something as an instance of a kind and explaining some of its properties in terms of its being the kind of thing it is are not two distinct activities, but a single cognitive activity." [148, p. 2]
Prasada and Dillingham [149] note that people represent relationships between the kinds of things and the properties that they possess. This description conforms with Overton's model of the structure of explanation [139] (see Section 2.6.5). Prasada and Dillingham's experiments showed that people distinguish between two types of properties for a kind: k-properties, which are the inherent properties of a thing that are due to its kind, and which they call principled connections; and t-properties, which are the extrinsic properties of a thing that are not due to its kind, which they call factual connections. Statistical correlations are examples of factual connections. For instance, a queen bee has a stinger and five legs because it is a bee (k-property), but the painted mark seen on almost all domesticated queen bees is because a beekeeper has marked it for ease of identification (t-property). K-properties have both principled and factual connections to their kind, whereas t-properties have mere factual connections. They note that k-properties have a normative aspect, in that it is expected that instances of kinds will have their k-properties, and when they do not, they are considered abnormal; for instance, a bee without a stinger.
In their experiments, they presented participants with explanations using different combinations of k-properties and t-properties to explain categorisations; for example, "why is this a dog?" Their results showed that for formal modes, explanations involving k-properties were considered much better than explanations involving t-properties, and further, that using a thing's kind to explain why it has a particular property was considered better for explaining k-properties than for explaining t-properties.
Using findings from previous studies, Cimpian and Salomon [30] argue that, when asked to explain a phenomenon, such as a feature of an object, people's cognitive biases make them more likely to use inherent features (k-properties) about the object to explain the phenomenon, rather than extrinsic features (t-properties), such as historical factors. An inherent feature is one that characterises "how an object is constituted" [30, p. 465], and therefore inherent features tend to be stable and enduring. For example, "spiders have eight legs" is inherent, while "his parents are scared of spiders" is not. Asked to explain why they find spiders scary, people are more likely to refer to the 'legginess' of spiders rather than the fact that their parents have arachnophobia, even though studies show that people with arachnophobia are more likely to have family members who find spiders scary [33]. Cimpian and Salomon argue that, even if extrinsic information is known, it is not readily accessible by the mental shotgun [82] that people use to retrieve information. For example, looking at spiders, you can see their legs, but not your family's fear of them. Therefore, this leads people to bias explanations towards inherent features rather than extrinsic ones. This is similar to the correspondence bias discussed in Section 3.2, in which people are more likely to describe people's behaviour in terms of personality traits rather than beliefs, desires, and intentions, because the latter are not readily accessible while the former are stable and enduring. The bias towards inherence is affected by many factors, such as prior knowledge, cognitive ability, expertise, culture, and age.
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
Cimpian and Salomon argue that, even if extrinsic information is known, it is not readily accessible by the mental shotgun [82] that people use to retrieve information. For example, looking at spiders, you can see their legs, but not your family's fear of them. This leads people to bias explanations towards inherent features rather than extrinsic ones. This is similar to the correspondence bias discussed in Section 3.2, in which people are more likely to explain others' behaviour in terms of personality traits rather than beliefs, desires, and intentions, because the latter are not readily accessible while the former are stable and enduring. The bias towards inherence is affected by many factors, such as prior knowledge, cognitive ability, expertise, culture, and age.
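
To make the inherence bias concrete in an explainable-AI setting, the sketch below ranks hypothetical candidate explanations while inflating the weight of inherent (k-property) causes. It is only an illustrative sketch: the `Candidate` record, the relevance values, and the `bias` factor are assumptions for illustration, not something proposed by Cimpian and Salomon or the other studies above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str         # candidate explanation offered to the user
    inherent: bool    # cites an inherent feature (k-property) of the object
    relevance: float  # assumed strength of the cause, in [0, 1]

def rank_with_inherence_bias(candidates, bias=1.5):
    """Order candidate explanations, inflating inherent features by `bias`.

    bias = 1.0 models an unbiased explainer; bias > 1.0 mimics the human
    tendency to reach for inherent features first.
    """
    def score(c):
        return c.relevance * (bias if c.inherent else 1.0)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical candidates for "why do people find spiders scary?"
candidates = [
    Candidate("spiders have many thin, fast-moving legs", inherent=True, relevance=0.6),
    Candidate("fear of spiders is modelled by family members", inherent=False, relevance=0.7),
]
for c in rank_with_inherence_bias(candidates):
    print(c.text)  # the inherent ('leggy') explanation is ranked first
```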

# 4.3. Causal Connection: Counterfactuals and Mutability

To determine the causes of anything other than a trivial event, it is not possible for a person to simulate back through all possible events and evaluate their counterfactual cases. Instead, people apply heuristics to select just some events to mutate. However, this process is not arbitrary. This section looks at several biases used to assess the mutability of events; that is, the degree to which an event can be 'undone' to consider counterfactual cases. It shows that abnormality (including social abnormality), intention, time and controllability of events are key criteria.

# 4.3.1. Abnormality

Kahneman and Tversky [83] performed seminal work in this field, proposing the simulation heuristic. They hypothesise that when answering questions about past events, people perform a mental simulation of counterfactual cases. In particular, they show that abnormal events are mutable: they are the events that people most commonly undo when judging causality. In their experiments, they asked people to identify primary causes in causal chains using vignettes of a car accident causing the fatality of Mr. Jones, which had multiple necessary causes, including Mr. Jones going through a yellow light and the teenage driver of the truck that hit Mr. Jones' car being under the influence of drugs. They used two vignettes: one in which Mr. Jones took an unusual route home to enjoy the view along the beach (the route version); and one in which he took the normal route home but left a bit early (the time version). Participants were asked to complete an 'if only' sentence that undid the fatal accident, imagining that they were a family member of Mr. Jones.
Most participants in the route group undid the event in which Mr. Jones took the unusual route home more often than those in the time version, while those in the time version undid the event of leaving early more often than those in the route version. That is, participants tended to focus more on abnormal causes. In particular, Kahneman and Tversky note that people did not simply undo the event with the lowest prior probability in the scenario.

In their second study, Kahneman and Tversky [83] asked the participants to empathise with the family of the teenager driving the truck instead of with Mr. Jones's family; they found that people more often undid the events of the teenage driver rather than those of Mr. Jones. Thus, the perspective or focus is important in what types of events people undo.

# 4.3.2. Temporality

Miller and Gunasegaram [131] show that the temporality of events is important; in particular, people undo more recent events rather than more distal events. For instance, in one of their studies, they asked participants to play the role of a teacher selecting exam questions. Participants in one group, the teacher-first group, were told that the students had not yet studied for their exam, while those in the other group, the teacher-second group, were told that the students had already studied for the exam. Those in the teacher-second group selected easier questions than those in the teacher-first group, showing that participants perceived that the degree of blame they would be given for hard questions depends on the temporal order of the tasks. This supports the hypothesis that earlier events are considered less mutable than later events.

# 4.3.3. Controllability and Intent

Girotto et al. [54] investigated mutability in causal chains with respect to controllability. They hypothesised that actions controllable by deliberative actors are more mutable than events that occur as a result of environmental effects. They provided participants with a vignette about Mr. Bianchi, who arrived home late from work to find his wife unconscious on the floor; his wife subsequently died. Four different events caused Mr. Bianchi's lateness: his decision to stop at a bar for a drink on the way home, plus three non-intentional causes, such as delays caused by abnormal traffic.
Different questionnaires were given out with the events in different orders. When asked to undo events, participants overwhelmingly selected the intentional event as the one to undo, demonstrating that people mentally undo controllable events over uncontrollable events, irrespective of the controllable event's position in the sequence or whether the event was normal or abnormal. In another experiment, they varied whether the deliberative actions were constrained or unconstrained, where an action is considered constrained when it is somewhat enforced by other conditions; for example, Mr. Bianchi going to the bar (more controllable) vs. stopping due to an asthma attack (less controllable). The results of this experiment show that unconstrained actions are more mutable than constrained actions.

# 4.3.4. Social Norms

McCloy and Byrne [121] investigated the mutability of controllable events further, looking at the perceived appropriateness (or socially normative perception) of the events. They presented a vignette similar to that of Girotto et al. [54], but with several controllable events, such as the main actor stopping to visit his parents, buying a newspaper, and stopping at a fast-food chain to get a burger. Participants were asked to provide causes as well as to rate the 'appropriateness' of each behaviour. The results showed that participants were more likely to indicate inappropriate events as causal; e.g. stopping to buy a burger. In a second, similar study, they showed that inappropriate events are traced through both normal and other exceptional events when identifying a cause.
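
Taken together, Sections 4.3.1 to 4.3.4 suggest a small set of features (abnormality, recency, controllability, intent, and social appropriateness) that predict which events people will mentally undo. As a rough illustration of how an explainable-AI system might operationalise these findings, the sketch below scores hypothetical events in a causal chain; the `Event` record, the weights, and the example chain are assumptions made for illustration and are not taken from the studies above.

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    abnormal: bool      # violates statistical or social norms (4.3.1, 4.3.4)
    recency: float      # 0.0 = earliest in the chain, 1.0 = most recent (4.3.2)
    controllable: bool  # result of an unconstrained, deliberative action (4.3.3)
    intentional: bool   # chosen by an agent rather than imposed on it (4.3.3)

def mutability(event, weights=(2.0, 1.0, 1.5, 1.5)):
    """Heuristic mutability score: higher means the event is a more likely
    target for counterfactual 'undoing', and hence for explanation."""
    w_abnormal, w_recency, w_control, w_intent = weights
    return (w_abnormal * event.abnormal
            + w_recency * event.recency
            + w_control * event.controllable
            + w_intent * event.intentional)

# A hypothetical Mr. Bianchi-style causal chain.
chain = [
    Event("heavy traffic on the motorway", abnormal=True, recency=0.3,
          controllable=False, intentional=False),
    Event("stopped at a bar for a drink", abnormal=False, recency=0.8,
          controllable=True, intentional=True),
]
print(max(chain, key=mutability).description)  # the intentional, controllable stop
```

In a real system, such a score would be only one heuristic among several for choosing which counterfactuals to present, and the weights would need empirical calibration.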

# 4.4. Explanation Selection

Similar to causal connection, people do not typically provide all causes for an event as an explanation. Instead, they select what they believe are the most relevant causes. Hilton [70] argues that explanation selection is used for cognitive reasons: causal chains are often too large to comprehend. He provides an example [70, p. 43, Figure 7] showing the causal chain for the story of the fatal car accident involving 'Mr. Jones' from Kahneman and Tversky [83]. For a simple story of a few paragraphs, the causal chain consists of over 20 events and 30 causes, all relevant to the accident. However, only a small number of these are selected as explanations [172].

In this section, we overview key work that investigates the criteria people use for explanation selection. Perhaps unsurprisingly, the criteria for selection look similar to those for mutability, with temporality (proximal events preferred over distal events), abnormality, and intention being important, but so too are the features that differ between fact and foil.
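
As a rough sketch of how these criteria could be combined computationally, the snippet below keeps only the causes that discriminate the fact from the foil and then orders them, for instance by a mutability score like the one sketched at the end of Section 4.3. The function, the cause labels, and the loan example are invented for illustration; they are not drawn from the cited studies.

```python
def contrastive_selection(fact_causes, foil_causes, ranker=None):
    """Select causes of the fact that would not also support the foil.

    `fact_causes` holds cause labels for the event that occurred, and
    `foil_causes` holds those that would hold in the counterfactual case the
    questioner expected. The set difference approximates 'the features that
    differ between fact and foil'; an optional `ranker` orders what remains.
    """
    candidates = fact_causes - foil_causes
    return sorted(candidates, key=ranker) if ranker else sorted(candidates)

# Hypothetical causes for "why was the loan refused (rather than approved)?"
fact = {"low credit score", "income verified", "application complete"}
foil = {"income verified", "application complete"}
print(contrastive_selection(fact, foil))  # ['low credit score']
```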

# 4.4.1. Facts and Foils