# 4.4.1. Facts and Foils
As noted in Section 2, why-questions are contrastive between a fact and a foil. Research shows that the contrast between the two is the primary way that people select explanations. In particular, to select an explanation from a set of causes, people look at the difference between the cases of the fact and the foil.
Mackie [110] was one of the earliest to argue for explanation selection based on contrastive criteria; however, the first crisp definition of contrastive explanation seems to come from Hesslow [69]:
This theory rests on two ideas. The first is that the effect or the explanandum, i.e. the event to be explained, should be construed, not as an object's having a certain property, but as a difference between objects with regard to a certain property. The second idea is that selection and weighting of causes is determined by explanatory relevance. [Emphasis from the original source] -- Hesslow [69, p. 24]
Hesslow [69] argues that the criteria for selecting explanations are clearly not arbitrary, because people seem to select explanations in similar ways to each other. He defines an explanandum as a relation containing an object a (the fact in our terminology), a set of comparison objects R, called the reference class (the foils), and a property E, which a has but the objects in the reference class R do not. For example, a = Spider, R = Beetle, and E = eight legs. Hesslow argues that the contrast between the fact and foil is the primary criterion for explanation selection, and that the explanation with the highest explanatory power should be the one that highlights the greatest number of differences in the attributes between the target and reference objects.
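To make Hesslow's relational reading concrete, here is a minimal sketch that encodes an explanans as a fact object a and a reference class R, and ranks candidate attributes by how many reference objects they distinguish the fact from, following the idea that more differences mean more explanatory power. The attribute values and the scoring rule are illustrative assumptions, not material from the paper, and the property E to be explained is left implicit.

```python
from dataclasses import dataclass

@dataclass
class Explanans:
    """Hesslow-style explanans: a fact object `a` and a reference class `R`."""
    fact: dict        # attributes of the fact object a
    reference: list   # attribute dicts for the objects in the reference class R

def explanatory_power(attribute: str, ex: Explanans) -> int:
    """Number of reference objects that differ from the fact on this attribute."""
    return sum(1 for obj in ex.reference
               if obj.get(attribute) != ex.fact.get(attribute))

# Illustrative data: a = Spider, R = {Beetle, Ant} (the ant is an invented extra).
spider = {"legs": 8, "eyes": 8, "wings": False}
beetle = {"legs": 6, "eyes": 2, "wings": True}
ant    = {"legs": 6, "eyes": 2, "wings": False}
ex = Explanans(fact=spider, reference=[beetle, ant])

ranked = sorted(spider, key=lambda attr: explanatory_power(attr, ex), reverse=True)
print(ranked)  # ['legs', 'eyes', 'wings']: attributes marking more differences rank first
```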
Lipton [102], building on earlier work in philosophy from Lewis [99], derived similar thoughts to Hesslow [69], without seeming to be aware of his work. He proposed a definition of contrastive explanation based on what he calls the Difference Condition:
To explain why P rather than Q, we must cite a causal difference between P and not-Q, consisting of a cause of P and the absence of a corresponding event in the history of not-Q. -- Lipton [102, p. 256]
From an experimental perspective, Hilton and Slugoski [77] were the first researchers both to identify the limitations of covariation and to propose instead that contrastive explanation is best described as the differences between the two events (discussed further in Section 4.4.2). More recent research in cognitive science from Rehder [154, 155] supports the theory that people perform causal inference, explanation, and generalisation based on contrastive cases.
Returning to our arthropod example, for the why-question between image J categorised as a fly and image K a beetle, image J having six legs is correctly determined to have no explanatory relevance, because it does not cause K to be categorised as a beetle instead of a fly. Instead, the explanation would cite some other cause, which according to Table 1, would be that the arthropod in image J has five eyes, consistent with a fly, while the one in image K has two, consistent with a beetle.
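As a complementary sketch, Lipton's Difference Condition can be read as a filter over the features attributed to the fact and the foil: features shared by both cases carry no explanatory relevance and are discarded. The feature values below follow the running image example; the function itself is an illustrative assumption, not a method from the paper.

```python
def contrastive_explanation(fact_features: dict, foil_features: dict) -> dict:
    """Keep only the features that differ between the fact and the foil."""
    return {
        name: (value, foil_features.get(name))
        for name, value in fact_features.items()
        if value != foil_features.get(name)
    }

# Image J (classified as a fly) versus image K (classified as a beetle).
image_j = {"legs": 6, "eyes": 5}
image_k = {"legs": 6, "eyes": 2}

print(contrastive_explanation(image_j, image_k))
# {'eyes': (5, 2)} -- six legs is shared by both images, so it is filtered out
```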
# 4.4.2. Abnormality
Related to the idea of contrastive explanation, Hilton and Slugoski [77] propose the abnormal conditions model, based on observations from legal theorists Hart and Honoré [64]. Hilton and Slugoski argue that abnormal events play a key role in causal explanation. They argue that, while statistical notions of covariance are not the only method employed in everyday explanations, the basic idea that people select unusual events to explain is valid. Their theory states that explainers use the background knowledge they perceive to be shared with explainees to select those conditions that are considered abnormal. They give the example of asking why the Challenger shuttle exploded in 1986 (rather than not exploding, or perhaps why most other shuttles do not explode). The explanation that it exploded "because of faulty seals" seems like a better explanation than "there was oxygen in the atmosphere". The abnormal conditions model accounts for this by noting that an explainer will reason that oxygen is present in the atmosphere whenever a shuttle launches, so this is not an abnormal condition. On the other hand, most shuttles do not have faulty seals, so this contributing factor was a necessary yet abnormal event in the Challenger disaster.
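One way to operationalise the abnormal conditions idea is to score each candidate cause by how rarely it holds across the background cases that the explainer assumes are shared knowledge, and to prefer the rarer (more abnormal) conditions. The sketch below does this for the Challenger example with invented background data; the scoring rule is my own illustration, not part of the published model.

```python
def abnormality(condition: str, background_cases: list) -> float:
    """Fraction of background cases in which the condition does NOT hold."""
    holds = sum(1 for case in background_cases if condition in case)
    return 1.0 - holds / len(background_cases)

# Invented background knowledge: conditions present in previous, uneventful launches.
previous_launches = [
    {"oxygen in atmosphere"},
    {"oxygen in atmosphere"},
    {"oxygen in atmosphere", "minor weather delay"},
    {"oxygen in atmosphere"},
]

candidate_causes = ["oxygen in atmosphere", "faulty seals"]
ranked = sorted(candidate_causes,
                key=lambda c: abnormality(c, previous_launches),
                reverse=True)
print(ranked)  # ['faulty seals', 'oxygen in atmosphere']
```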
The abnormal conditions model has been backed up by subsequent experimental studies, such as those by McClure and Hilton [125], McClure et al. [126], and Hilton et al. [76], and, more recently, by Samland and Waldmann [161], who show that a variety of non-statistical measures are valid foils.
# 4.4.3. Intentionality and Functionality
Other features of causal chains have been demonstrated to be more important than abnormality.
Hilton et al. [76] investigate the claim from legal theorists Hart and Honoré [64] that intentional action takes priority over non-intentional action in opportunity chains. Their perspective builds on the abnormal conditions model, noting that there are two important contrasts in explanation selection: (1) normal vs. abnormal; and (2) intentional vs. non-intentional. They argue further that causes will be "traced through" a proximal (more recent) abnormal condition if there is a more distal (less recent) event that is intentional. For example, to explain why someone died, one would explain that the poison they ingested as part of a meal was the cause of death; but if the poison was shown to have been deliberately placed in an attempt to murder the victim, the intention of someone to murder the victim receives priority. In their experiments, they gave participants different opportunity chains in which a proximal abnormal cause was an intentional human action, an unintentional human action, or a natural event, depending on the condition to which they were assigned.
For example, a cause of an accident was ice on the road, which was enabled by someone deliberately spraying the road, by someone unintentionally placing water on the road, or by water from a storm. Participants were asked to rate the explanations. Their results showed that: (1) participants rated intentional action as a better explanation than the other two causes, and non-intentional action as better than natural causes; and (2) in opportunity chains, there is little preference for proximal over distal events if the two events are of the same type (e.g. both are natural events); both are seen as necessary.
Lombrozo [107] argues further that this holds for functional explanations in general, not just intentional action. For instance, citing the functional reason that an object exists is preferred to mechanistic explanations.
# 4.4.4. Necessity, Sufficiency and Robustness
Several authors [102, 107, 192] argue that necessity and sufficiency are strong criteria for preferred explanatory causes. Lipton [102] argues that necessary causes are preferred to sufficient causes. For example, consider mutations in the DNA of a particular species of beetle that cause its wings to grow longer than normal when it is kept in certain temperatures. Now, consider that there are two such mutations, M1 and M2, and either is sufficient to cause the longer wings. To contrast with a beetle whose wings would not change, the explanation of temperature is preferred to either of the mutations M1 or M2, because neither M1 nor M2 is individually necessary for the observed event; merely that one of M1 or M2 is present. In contrast, the temperature is necessary, and is preferred, even if we know that the cause was M1.
Woodward [192] argues that sufficiency is another strong criterion, in that people prefer causes that bring about the effect without any other cause. This should not be confused with sufficiency in the example above, in which either mutation M1 or M2 is sufficient in combination with the temperature. Woodward's argument applies to uniquely sufficient causes, rather than cases in which there are multiple sufficient causes. For example, if it were found that a third mutation M3 could cause longer wings irrespective of the temperature, this would be preferred over temperature plus another mutation. This is related to the notion of simplicity discussed in Section 4.5.1.
Finally, several authors [107, 192] argue that robustness is also a criterion for explanation selection, in which the extent to which a cause C is considered robust is whether the effect E would still have occurred if conditions other than C were somewhat different. Thus, a cause C1 that holds only in specific situations has less explanatory value than a cause C2 that holds in many other situations.
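These criteria can be phrased as counterfactual tests over a causal model. The boolean model below (temperature AND at least one of the mutations M1, M2 produces longer wings) is an illustrative reconstruction of the beetle example, and the robustness measure is a crude stand-in for the informal notion above, not a definition from the cited authors.

```python
from itertools import product

VARS = ["temperature", "M1", "M2"]

def long_wings(s: dict) -> bool:
    # Illustrative model: longer wings require the temperature AND at least one mutation.
    return s["temperature"] and (s["M1"] or s["M2"])

def all_scenarios():
    for values in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, values))

def necessary(cause: str) -> bool:
    """No scenario produces the effect without the cause."""
    return all(s[cause] for s in all_scenarios() if long_wings(s))

def sufficient(cause: str) -> bool:
    """Every scenario containing the cause produces the effect."""
    return all(long_wings(s) for s in all_scenarios() if s[cause])

def robustness(cause: str) -> float:
    """Fraction of settings of the other conditions under which the cause is accompanied by the effect."""
    cases = [s for s in all_scenarios() if s[cause]]
    return sum(long_wings(s) for s in cases) / len(cases)

for c in VARS:
    print(c, necessary(c), sufficient(c), round(robustness(c), 2))
# temperature True  False 0.75  <- necessary, so preferred
# M1          False False 0.5
# M2          False False 0.5
```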
# 4.4.5. Responsibility
The notions of responsibility and blame are relevant to causal selection, in that an event considered more responsible for an outcome is likely to be judged a better explanation than other causes. In fact, responsibility relates closely to necessity, as it aims to place a measure of the "degree of necessity" of causes. An event that is fully responsible for an outcome is a necessary cause.
Chockler and Halpern [29] modified the structural equation model proposed by Halpern and Pearl [58] (see Section 2.1.1) to define the responsibility of an outcome. Informally, they define the responsibility of a cause C for an event E under a situation based on the minimal number of changes required to the situation to make event E no longer occur. If N is the minimal number of changes required, then the responsibility of C for causing E is 1/(N + 1). If N = 0, then C is fully responsible. Thus, one can see that an event that is considered more responsible than another requires fewer changes to prevent E than the other.
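A small worked example of the measure as stated here, using a standard voting illustration (it is not Gerstenberg and Lagnado's task): in an 11-0 vote, each voter needs five other votes changed before their own vote becomes critical, giving responsibility 1/6, while in a 6-5 vote each majority voter is critical as things stand and is fully responsible. The brute-force encoding below is my own simplification of the structural-model definition.

```python
from itertools import combinations

def passes(votes: list) -> bool:
    return sum(votes) > len(votes) / 2

def responsibility(votes: list, voter: int) -> float:
    """Degree of responsibility of `voter` for the outcome, computed as 1 / (N + 1).

    N is the minimal number of OTHER votes that must be changed (without already
    changing the outcome) before flipping `voter`'s own vote changes the outcome.
    """
    outcome = passes(votes)
    others = [i for i in range(len(votes)) if i != voter]
    for n in range(len(others) + 1):
        for changed in combinations(others, n):
            contingency = list(votes)
            for i in changed:
                contingency[i] = not contingency[i]
            if passes(contingency) != outcome:
                continue  # the contingency alone must not already flip the outcome
            contingency[voter] = not contingency[voter]  # now flip the candidate cause
            if passes(contingency) != outcome:
                return 1.0 / (n + 1)
    return 0.0  # the voter cannot affect the outcome at all

print(responsibility([True] * 11, voter=0))               # 11-0 vote: 1/6 ~= 0.17
print(responsibility([True] * 6 + [False] * 5, voter=0))  # 6-5 vote: 1.0, fully responsible
```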
The structural model approach defines the responsibility of events, rather than of individuals or groups, but one can see that it can be used in group models as well. Gerstenberg and Lagnado [48] show that the model has strong predictive power for attributing responsibility to individuals in groups. They ran a set of experiments in which participants played a simple game in teams, in which each individual was asked to count the number of triangles in an image, and teams won or lost depending on how accurate their collective counts were. After the game, participants rated the responsibility of each player for the outcome. Their results showed that the modified structural equation model of Chockler and Halpern [29] was more accurate at predicting participants' attributions than a simple counterfactual model and the so-called Matching Model, in which responsibility is defined as the degree of deviation from the outcome; in the triangle counting game, this would be how far off the individual was from the actual number of triangles.
# 4.4.6. Preconditions, Failure, and Intentions
An early study into explanation selection in cases with more than one cause was undertaken by Leddo et al. [96]. They conducted studies asking people to rate the probability of different factors as causes of events. As predicted by the intention/goal-based theory, goals were considered better explanations than relevant preconditions. However, people also rated conjunctions of preconditions and goals as better explanations of why the event occurred. For example, for the action "Fred went to the restaurant", participants rated explanations such as "Fred was hungry" as more likely than "Fred had money in his pocket", but rated "Fred was hungry and had money in his pocket" as an even more likely explanation, despite the fact that this cause is itself less likely (being the conjunction of the two probabilities). This is consistent with the well-known conjunction fallacy [173], which shows that people sometimes estimate the probability of the conjunction of two facts as higher than that of either individual fact if those two facts are representative of prior beliefs.
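The reason this pattern is a fallacy is simply the product rule: a conjunction can never be more probable than either of its conjuncts, as the inequality below recalls.

```latex
% A conjunction can never be more probable than either conjunct:
P(A \land B) = P(A)\,P(B \mid A) \le P(A),
\qquad
P(A \land B) = P(B)\,P(A \mid B) \le P(B),
\qquad\Longrightarrow\qquad
P(A \land B) \le \min\{P(A),\, P(B)\}.
```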
However, Leddo et al. [96] further showed that for failed or uncompleted actions, just one cause (goal or precondition) was considered a better explanation, indicating that failed actions are explained differently. This is consistent with physical causality explanations [106]. Leddo et al. argue that to explain an action, people combine their knowledge of the particular situation with a more general understanding of causal relations. Lombrozo [107] argues similarly that this is because failed actions are not goal-directed, because people do not intend to fail. Thus, people prefer mechanistic explanations for failed actions, rather than explanations that cite intentions.
McClure and Hilton [123] and McClure et al. [124] found that people tend to assign a higher probability to the conjoined goal and precondition for a successful action, even though they prefer the goal as the best explanation, except in extreme/unlikely situations; that is, when the precondition is unlikely to be true. They argue that this is largely due to the (lack of) controllability of unlikely actions. That is, extreme/unlikely events are judged to be harder to control, and thus actors would be less likely to intentionally select that action unless the unlikely opportunity presented itself. However, for normal and expected actions, participants preferred the goal alone as an explanation instead of the goal and precondition.
In a follow-up study, McClure and Hilton [125] looked at explanations of obstructed vs. unobstructed events, in which an event is obstructed by its precondition being false; for example, "Fred wanted a coffee, but did not have enough money to buy one" as an explanation for why Fred failed to get a coffee. They showed that while goals are important to both, for obstructed events the precondition becomes more important than for unobstructed events.
# 4.5. Explanation Evaluation
In this section, we look at work that has investigated the criteria that people use to evaluate explanations. The most important of these are: probability, simplicity, generality, and coherence with prior beliefs.
# 4.5.1. Coherence, Simplicity, and Generality
Thagard [171] argues that coherence is a primary criterion for explanation. He proposes the Theory of Explanatory Coherence, which specifies seven principles of how explanations relate to prior belief. He argues that these principles are foundational principles that explanations must observe to be acceptable. They capture properties such
as: if some set of properties P explains some other property Q, then all properties in P must be coherent with Q; that is, people will be more likely to accept explanations if they are consistent with their prior beliefs. Further, he contends that, all things being equal, simpler explanations (those that cite fewer causes) and more general explanations (those that explain more events) are better explanations. The model has been demonstrated to align with how humans make judgements about explanations [151].
Read and Marcus-Newhall [153] tested the hypotheses from Thagard's theory of explanatory coherence [171] that people prefer simpler and more general explanations. Participants were asked to rate the probability and the "quality" of explanations with different numbers of causes. They were given stories containing several events to be explained, and several different explanations. For example, one story was about Cheryl, who is suffering from three medical problems: (1) weight gain; (2) fatigue; and (3) nausea. Different participant groups were given one of three types of explanation: (1) narrow: one of Cheryl having stopped exercising (explains weight gain), having mononucleosis (explains fatigue), or having a stomach virus (explains nausea); (2) broad: Cheryl is pregnant (explains all three); or (3) conjunctive: all three from item 1 at the same time. As predicted, participants preferred simple explanations (pregnancy), with fewer causes, over more complex ones (all three in conjunction), and participants preferred explanations that explained more events.
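A rough sketch of how the simplicity and generality criteria could be scored over the Cheryl example follows; the candidate sets come from the story above, but the numeric weights and the coherence stub are arbitrary illustrations, not Thagard's principles.

```python
OBSERVATIONS = {"weight gain", "fatigue", "nausea"}

# Candidate explanations from the Read and Marcus-Newhall story.
CANDIDATES = {
    "pregnancy": {
        "causes": {"pregnancy"},
        "explains": {"weight gain", "fatigue", "nausea"},
    },
    "stopped exercising": {
        "causes": {"stopped exercising"},
        "explains": {"weight gain"},
    },
    "conjunction of all three narrow causes": {
        "causes": {"stopped exercising", "mononucleosis", "stomach virus"},
        "explains": {"weight gain", "fatigue", "nausea"},
    },
}

def score(candidate: dict, prior_beliefs: set) -> float:
    generality = len(candidate["explains"] & OBSERVATIONS)  # more events explained is better
    simplicity = -len(candidate["causes"])                  # fewer causes is better
    coherence = -len(candidate["causes"] - prior_beliefs)   # clashing with prior beliefs costs
    return 2.0 * generality + simplicity + coherence        # arbitrary illustrative weights

priors = {"pregnancy", "stopped exercising", "mononucleosis", "stomach virus"}
best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name], priors))
print(best)  # 'pregnancy': it covers all three symptoms with a single cause
```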
# 4.5.2. Truth and Probability
Probability has two facets in explanation: the probability of the explanation being true, and the use of probability in an explanation. Neither has as much importance as one may expect.
The use of statistical relationships to explain events is considered to be unsatisfying on its own. This is because people desire causes to explain events, not associative relationships. Josephson and Josephson [81] give the example of a bag full of red balls. When selecting a ball randomly from the bag, it must be red, and one can ask: "Why is this ball red?". The answer that uses the statistical generalisation "Because all balls in the bag are red" is not a good explanation, because it does not explain why that particular ball is red. A better explanation is that someone painted it red. However, for the question "Why did we observe a red ball coming out of the bag?", it is a good explanation, because having only red balls in the bag does cause us to select a red one. Josephson and Josephson highlight the difference between explaining the fact observed (the ball is red) and explaining the event of observing the fact (a red ball was selected). To explain instances via statistical generalisations, we need to explain the causes of those generalisations too, not the generalisations themselves.
If the reader is not convinced, consider my own example: a student coming to their teacher to ask why they received only 50% on an exam. An explanation that most students scored around 50% is not going to satisfy the student. Adding a cause for why most students scored only 50% would be an improvement. Explaining to the student why they specifically received 50% is even better, as it explains the cause of the instance itself.
The truth or likelihood of an explanation is considered an important criterion of a good explanation. However, Hilton [73] shows that the most likely or "true" cause is not necessarily the best explanation. Truth conditions [4] are a necessary but not sufficient criterion for the generation of explanations. While a true or likely cause is one attribute of a good explanation, tacitly implying that the most probable cause is always the best explanation is incorrect. As an example, consider again the explosion of the Challenger shuttle (Section 4.4.2), in which a faulty seal was argued to be a better explanation than oxygen in the atmosphere. This is despite the fact that the "seal" explanation is a likely but not known cause, while the "oxygen" explanation is a known cause. Hilton argues that this is because the fact that there is oxygen in the atmosphere is presupposed; that is, the explainer assumes that the explainee already knows this.

[4] We use the term truth condition to refer to facts that are either true or considered likely by the explainee.
McClure [122] also challenges the idea of probability as a criterion for explanations. Their studies found that people tend not to judge the quality of explanations by their probability, but instead by their so-called pragmatic influences on causal behaviour. That is, people judge explanations on their usefulness, relevance, etc., including via Grice's maxims of conversation [56] (see Section 5.1.1 for a more detailed discussion of this). This is supported by experiments such as those of Read and Marcus-Newhall [153] cited above, and by the work of Tversky and Kahneman [173] on the conjunction fallacy.
Lombrozo [105] notes that the experiments on generality and simplicity performed by Read and Marcus-Newhall [153] cannot rule out that participants selected simple explanations because they did not have probability or frequency information for the events. Lombrozo argues that if participants assumed that the events of stopping exercising, having mononucleosis, having a stomach virus, and being pregnant were all equally likely, then the probability of the conjunction of any three of them is much lower than that of any one alone. To counter this, she investigated the influence that probability has on explanation evaluation, in particular when simpler explanations are less probable than more complex ones. Based on a similar experimental setup to that of Read and Marcus-Newhall [153], Lombrozo presented experimental participants with information about a patient with several symptoms that could be explained by one cause or by several separate causes. In some setups, base-rate information about each disease was provided, in which the conjunction of the separate causes was more likely than the single (simpler) cause. Without base-rate information, participants selected the simplest (and less likely) explanations.
When base-rate information was included, this still occurred, but the difference was less pronounced. However, the conjunctive scenario had to be significantly more likely for it to be chosen. Lombrozo's final experiment showed that this effect was reduced again if participants were explicitly provided with the joint probability of the two events, rather than, as in the earlier experiments, having the probabilities provided separately.
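To make the base-rate manipulation concrete, assume (purely for illustration; these are not Lombrozo's stimuli) that the three separate causes occur independently. The conjunctive explanation is then only more probable than the single cause when the product of the three base rates exceeds the single cause's base rate:

```python
# Invented base rates, for illustration only.
p_stopped_exercising = 0.20
p_mononucleosis      = 0.15
p_stomach_virus      = 0.25
p_pregnancy          = 0.005   # the single, simpler cause is made rare on purpose

# Assuming independence, the three-cause conjunction has probability:
p_conjunction = p_stopped_exercising * p_mononucleosis * p_stomach_virus

print(p_conjunction)                 # ~0.0075
print(p_conjunction > p_pregnancy)   # True: the complex explanation is more probable here,
                                     # yet participants still tended to favour the simpler one
                                     # unless the gap was made much larger.
```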
Preston and Epley [150] show that the value that people assign to their own beliefs, both in terms of probability and personal relevance, corresponds with the explanatory power of those beliefs. Participants were each given a particular "belief" that is generally accepted by psychologists but mostly unknown to the general public, and were then allocated to three conditions: (1) the applications condition, who were asked to list observations that the belief could explain; (2) the explanations condition, who were asked to list observations that could explain the belief (the inverse of the previous condition); and (3) a control condition, who did neither. Participants were then asked to consider the probability of that belief being true, and to assign their perceived value of the belief to themselves and to society in general. The results show that people in the applications and explanations conditions both assigned a higher probability to the belief being true, demonstrating that if people link beliefs to certain situations, the perceived probability increases. For value, however, the results differed: those in the applications condition assigned a higher value than the other two conditions, and those in the explanations condition assigned a lower value than the other two conditions. This indicates that people assign higher value to beliefs that explain observations, but lower value to beliefs that can be explained by other observations.
Kulesza et al. [90] investigate the balance between soundness and completeness of explanation. They investigated explanatory debugging of machine learning algorithms making personalised song recommendations. By using progressively simpler models with fewer features, they trained a recommender system to give less correct recommendations. Participants were given recommendations for songs on a music social media site, based on their listening history, and were placed into one of several treatments. Participants in each treatment would be given a different combination of soundness and completeness, where soundness means that the explanation is correct and completeness means that all of the underlying causes are identified. For example, one treatment had low soundness but high completeness, while another had medium soundness and medium completeness. Participants were given a list of recommended songs to listen to, along with the (possibly unsound and incomplete) explanations, and were subsequently asked why each song had been recommended. The participants' mental models were then measured.
The results show that sound and complete models were the best for building a correct mental model, but at the expense of cost/benefit. Complete but unsound explanations improved the participants' mental models more than soundness did, and gave a better perception of cost/benefit, but reduced trust. Sound but incomplete explanations were the least preferred, resulting in higher costs and more requests for clarification. Overall, Kulesza et al. concluded that completeness was more important than soundness. From these results, Kulesza et al. [89] list three principles for explainability: (1) Be sound; (2) Be complete; but (3) Don't overwhelm. Clearly, principles 1 and 2 are at odds with principle 3, indicating that careful design must be put into explanatory debugging systems.
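As a rough illustration of how these three principles can be traded off when deciding what to present, here is a minimal sketch; the weighting scheme and the size budget are assumptions for illustration, not Kulesza et al.'s method.

```python
def select_explanation(causes, budget=3):
    """Choose a subset of causes to present to the user.

    `causes` is a list of (description, weight) pairs, where weight is the cause's
    contribution to the decision. Soundness: the causes are reported as they are,
    never altered. Completeness: the highest-weight causes are kept first.
    Don't overwhelm: at most `budget` items are shown, and the omitted weight is
    reported so the user knows the explanation is partial.
    """
    ranked = sorted(causes, key=lambda c: abs(c[1]), reverse=True)
    shown, hidden = ranked[:budget], ranked[budget:]
    omitted_weight = sum(abs(w) for _, w in hidden)
    return shown, omitted_weight

shown, omitted = select_explanation([
    ("you listened to similar artists", 0.55),
    ("the song is popular this week", 0.20),
    ("friends of yours liked this song", 0.15),
    ("time of day", 0.04),
])
print(shown)
print(f"weight omitted for brevity: {omitted:.2f}")
```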
# 4.5.3. Goals and Explanatory Mode
Vasilyeva et al. [177] show that the goal of the explainer is key to how explanations are evaluated, in particular in relation to the mode of explanation used (i.e. material, formal, efficient, final). In their experiments, they gave participants different tasks with varying goals. For instance, some participants were asked to assess the causes behind some organisms having certain traits (efficient), others were asked to categorise organisms into groups (formal), and a third group was asked for what reason organisms would have those traits (functional). They provided explanations using different modes for parts of the tasks and then asked participants to rate the 'goodness' of an explanation provided to them. Their results showed that the goals not only shifted the focus of the questions asked by participants, but also that participants preferred modes of explanation that were more congruent with the goal of their task. This is further evidence that being clear about the question being asked is important in explanation.
# 4.6. Cognitive Processes and XAI
This section presents some ideas on how the work on the cognitive processes of explanation affects researchers and practitioners in XAI.
The idea of explanation selection is not new in XAI. Particularly in machine learning, in which models have many features, the problem is salient. Existing work has primarily looked at selecting which features in the model were important for a decision, mostly built on local explanations [158, 6, 157] or on information gain [90, 89]. However, as far as the authors are aware, there are currently no studies that look at the cognitive biases of humans as a way to select explanations from a set of causes.
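For example, a common pattern is to take the weights of a local, per-decision explanation and keep only the few most influential features. The sketch below uses made-up feature names and weights and assumes no particular explanation library; a cognitively informed selector would go further, e.g. preferring features that differ between the fact and a contrastive foil.

```python
def top_k_features(local_weights, k=3):
    """Return the k features with the largest absolute contribution to one decision."""
    return sorted(local_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

# Hypothetical local explanation of a single loan decision.
weights = {
    "income": 0.42,
    "missed_payments": -0.38,
    "account_tenure": 0.11,
    "age": 0.05,
    "postcode": 0.02,
}
print(top_k_features(weights))
```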
# 4.6.1. Abductive Reasoning
Using abductive reasoning to generate explanations has a long history in artificial intelligence [97], aiming to solve problems such as fault diagnosis [144], plan/intention recognition [24], and generalisation in learning [133]. Findings from such work have parallels with many of the results from cognitive science/psychology outlined in this section. Leake [95] provides an excellent overview of the challenges of abduction for everyday explanation, and summarises work that addresses these. He notes that three of the main tasks an abductive reasoner must perform are: (1) deciding what to explain about a given situation (determining the question); (2) generating explanations (abductive reasoning); and (3) evaluating the 'best' explanation (explanation selection and evaluation). He stresses that determining the goal of the explanation is key to providing a good explanation, echoing the social scientists' view that the explainee's question is important, and that such questions are typically focused on anomalies or surprising observations.
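A minimal sketch of the second and third tasks is given below, using a set-cover style of abduction: each hypothesis is annotated with the observations it accounts for and an assumed prior, smaller hypothesis sets are preferred (simplicity), and ties are broken by probability. This is a standard simplification for illustration, not Leake's approach.

```python
from itertools import combinations

def best_explanation(hypotheses, observations):
    """Return the smallest, then most probable, set of hypotheses covering the observations.

    `hypotheses` maps a name to (covered_observations, prior_probability); priors are
    treated as independent for simplicity.
    """
    names = list(hypotheses)
    for size in range(1, len(names) + 1):            # prefer simpler explanations first
        candidates = []
        for combo in combinations(names, size):
            covered = set().union(*(hypotheses[h][0] for h in combo))
            if observations <= covered:
                prob = 1.0
                for h in combo:
                    prob *= hypotheses[h][1]
                candidates.append((prob, combo))
        if candidates:
            return max(candidates)                   # most probable among the smallest
    return None

hypotheses = {
    "flat_battery":  ({"car_wont_start"}, 0.3),
    "no_fuel":       ({"car_wont_start", "fuel_gauge_empty"}, 0.2),
    "starter_motor": ({"car_wont_start"}, 0.1),
}
print(best_explanation(hypotheses, {"car_wont_start", "fuel_gauge_empty"}))
```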
The work from Rehder [154, 155] shows that explanation is good for learning and generalisation. This is interesting and relevant for XAI, because it shows that individual users should require less explanation the more they interact with a system. First, because they will construct a better mental model of the system and be able to generalise its behaviour (effectively learning its model). Second, as they see more cases, they should become less surprised by abnormal phenomena, which, as noted in Section 4.4.2, is a primary trigger for requesting explanations. An intelligent agent that presents an explanation, unprompted, alongside every decision runs the risk of providing explanations that become less needed and more distracting over time.
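One way an agent could act on this is to volunteer an explanation only when the outcome is likely to surprise the observer. The sketch below assumes a very crude observer model, namely the empirical frequency of events the observer has already seen; a more sophisticated learned user model would slot into the same interface.

```python
class SurpriseTriggeredExplainer:
    """Volunteer explanations only for events the observer should find surprising."""

    def __init__(self, surprise_threshold=0.2):
        self.counts = {}        # crude observer model: event frequencies seen so far
        self.total = 0
        self.surprise_threshold = surprise_threshold

    def observe(self, event):
        self.counts[event] = self.counts.get(event, 0) + 1
        self.total += 1

    def should_explain(self, event):
        # Estimated probability of the event under the observer's experience so far.
        p = self.counts.get(event, 0) / self.total if self.total else 0.0
        return p < self.surprise_threshold

explainer = SurpriseTriggeredExplainer()
for event in ["turn_left"] * 8 + ["emergency_stop"]:
    explainer.observe(event)

print(explainer.should_explain("turn_left"))        # False: routine behaviour
print(explainer.should_explain("emergency_stop"))   # True: rare, likely surprising
```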
The work on inherent vs. extrinsic features (Section 4.2.4) is relevant for many AI applications, in particular classification tasks. In preliminary work, Bekele et al. [7] use the inherence bias [30] to explain person identification in images. Their re-identification system is tasked with determining whether two images contain the same person, and uses inherent features such as age, gender, and hair colour, as well as extrinsic features such as clothing or wearing a backpack. Their explanations use the inherence bias with the aim of improving the acceptability of the explanation. In particular, when the two images are deemed to be of the same person, extrinsic properties are used, while for different people, intrinsic properties are used. This work is preliminary and has not yet been evaluated, but it is an excellent example of using cognitive biases to improve explanations.
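The selection rule itself is simple enough to sketch; the feature names below are invented, and the point is only the switch between extrinsic and intrinsic attributes depending on the decision.

```python
INTRINSIC = {"age_range", "gender", "hair_colour"}
EXTRINSIC = {"clothing", "backpack", "location"}

def explanation_features(decision_same_person, supporting_features):
    """Pick which supporting attributes to mention, following the inherence bias.

    Extrinsic attributes are preferred when the decision is 'same person';
    intrinsic attributes are preferred when the decision is 'different people'.
    """
    preferred = EXTRINSIC if decision_same_person else INTRINSIC
    chosen = [f for f in supporting_features if f in preferred]
    return chosen or supporting_features   # fall back if nothing preferred applies

print(explanation_features(True,  ["hair_colour", "backpack", "clothing"]))
print(explanation_features(False, ["hair_colour", "clothing"]))
```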
# 4.6.2. Mutability and Computation
Section 4.3 studies the heuristics that people use to discount some events over others during mental simulation of causes. This is relevant to some areas of explainable AI because, in the same way that people apply these heuristics to more efficiently search through a causal chain, so too can these heuristics be used to more efficiently find causes, while still identifying causes that a human explainee would expect.
The notions of causal temporality and responsibility would be reasonably straightforward to capture in many models; however, if one can also capture concepts such as abnormality, responsibility, intentionality, or controllability in models, this provides further opportunities.
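A sketch of using such heuristics to rank the events in a causal chain is shown below; it assumes each event is already tagged with these properties, and the weights are arbitrary placeholders rather than empirically derived values.

```python
def rank_causes(events):
    """Order candidate causes by the mutability-style heuristics people tend to apply.

    Each event is a dict of flags; abnormal, intentional and controllable events,
    and more recent ones, are prioritised when selecting causes to explain.
    """
    def score(event):
        return (2.0 * event.get("abnormal", False)
                + 1.5 * event.get("intentional", False)
                + 1.0 * event.get("controllable", False)
                + 0.5 * event.get("recency", 0.0))   # recency scaled to [0, 1]
    return sorted(events, key=score, reverse=True)

chain = [
    {"name": "heavy_traffic",  "abnormal": False, "controllable": False, "recency": 0.2},
    {"name": "took_shortcut",  "abnormal": True,  "intentional": True,
     "controllable": True,     "recency": 0.8},
    {"name": "late_departure", "abnormal": True,  "intentional": False,
     "controllable": True,     "recency": 0.5},
]
print([event["name"] for event in rank_causes(chain)])
```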
# 4.6.3. Abnormality
Abnormality clearly plays a role in explanation and interpretability. For explanation, it serves as a trigger, and is a useful criterion for explanation selection. For interpretability, it is clear that 'normal' behaviour will, on aggregate, be judged more explainable than abnormal behaviour.
Abnormality is a key criterion for explanation selection, and as such, the ability to identify abnormal events in causal chains could improve the explanations that can be supplied by an explanatory agent. For some models, such as those used for probabilistic reasoning, identifying abnormal events would be straightforward, and for others, such as normative systems, they are 'built in'; for other types of models, identifying abnormal events could prove difficult but valuable.
One important note to make is regarding abnormality and its application to 'non-contrastive' why-questions. As noted in Section 2.6.2, questions of the form "Why P?" may have an implicit foil, and determining this can improve explanation. In some cases, normality could be used to mitigate this problem. That is, in the case of "Why P?", we can interpret this as "Why P rather than the normal case Q?" [72]. For example, consider the application of assessing the risk of glaucoma [22]. Instead of asking why they were given a positive diagnosis rather than a negative diagnosis, the explanatory agent could provide one or more default foils, which would be 'stereotypical' examples of people who were not diagnosed and whose symptoms were more regular with respect to the general population. Then, the question becomes why the person was diagnosed with glaucoma compared to these default stereotypical cases without glaucoma.
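A sketch of supplying such a default foil is given below. It assumes access to a set of undiagnosed cases and takes the one closest to their mean as the 'stereotypical' foil; the attribute names and numbers are invented, and a real system would use a better typicality measure.

```python
def default_foil(patient, negative_cases):
    """Pick the most typical negative case as the implicit foil, then contrast with it."""
    keys = list(negative_cases[0])
    mean = {k: sum(case[k] for case in negative_cases) / len(negative_cases) for k in keys}
    foil = min(negative_cases,
               key=lambda case: sum((case[k] - mean[k]) ** 2 for k in keys))
    # The contrastive explanation then focuses on the attributes that differ.
    differences = {k: (patient[k], foil[k]) for k in keys if patient[k] != foil[k]}
    return foil, differences

patient = {"intraocular_pressure": 28.0, "age": 61, "cup_disc_ratio": 0.7}
negatives = [
    {"intraocular_pressure": 14.0, "age": 58, "cup_disc_ratio": 0.3},
    {"intraocular_pressure": 16.0, "age": 63, "cup_disc_ratio": 0.4},
]
print(default_foil(patient, negatives)[1])
```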
# 4.6.4. Intentionality and Functionality
The work discussed in Section 4.4.3 demonstrates the importance of intentionality and functionality in selecting explanations. As discussed in Section 3.6.1, these concepts are highly relevant to deliberative AI systems, in which concepts such as goals and intentions are first-class citizens. However, the importance of this to explanation selection, rather than to social attribution, must be drawn out. In social attribution, folk-psychological concepts such as intentions are attributed to agents to identify causes and explanations, while in this section, intentions are used as part of the cognitive process of selecting explanations from a causal chain. Thus, even for a non-deliberative system, labelling causes as intentional could be useful. For instance, consider a predictive model in which some features represent that an intentional event has occurred. Prioritising these may lead to more intuitive explanations.
# 4.6.5. Perspectives and Controllability
The finding from Kahneman and Tversky [83] that perspectives change the events people mutate, discussed in Section 4.3, is important in multi-agent contexts. It implies that when explaining a particular agent's decisions or behaviour, the explanatory agent could focus on undoing actions of that particular agent, rather than those of others. This is also consistent with the research on controllability discussed in Section 4.3, in that, from the perspective of the agent in question, it can only control its own actions.
Further, in generating explainable behaviour, with all other things being equal, agents could select actions that lead to future actions being more constrained, as the subsequent actions are less likely to have counterfactuals undone by the observer.
# 4.6.6. Evaluation of Explanations
Likelihood is not everything. While likely causes are part of good explanations, they do not strongly correlate with explanations that people find useful. The work outlined in this section provides three criteria that are at least as important: simplicity, generality, and coherence.
For explanation, if the goal of an explanatory agent is to provide the most likely causes of an event, then these three criteria can be used to prioritise among the most likely events. However, if the goal of an explanatory agent is to generate trust between itself and its human observers, these criteria should be considered as first-class criteria in explanation generation, beside or even above likelihood. For example, providing simpler explanations that increase the likelihood that the observer both understands and accepts the explanation may increase trust more than giving more likely explanations.
For interpretability, similarly, these three criteria can form part of decision-making algorithms; for example, a deliberative agent may opt to select an action that is less likely to achieve its goal if the action helps towards other goals that the observer knows about, and has a smaller number of causes to refer to.
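A sketch of treating these criteria as first-class citizens alongside likelihood when scoring candidate explanations is shown below; the particular weights and the crude measures of simplicity, generality, and coherence are assumptions for illustration only.

```python
def score_explanation(candidate, prior_beliefs, weights=(0.4, 0.2, 0.2, 0.2)):
    """Score a candidate explanation on likelihood, simplicity, generality and coherence.

    `candidate` holds its probability, its causes, and the number of other observed
    events those causes also account for; `prior_beliefs` is what the explainee
    already accepts as true.
    """
    w_like, w_simp, w_gen, w_coh = weights
    likelihood = candidate["probability"]
    simplicity = 1.0 / len(candidate["causes"])
    generality = min(1.0, candidate["events_accounted_for"] / 5.0)
    coherence = len(set(candidate["causes"]) & prior_beliefs) / len(candidate["causes"])
    return w_like * likelihood + w_simp * simplicity + w_gen * generality + w_coh * coherence

beliefs = {"sensor_noise_is_common"}
candidates = [
    {"name": "single-cause", "probability": 0.5,
     "causes": ["sensor_noise_is_common"], "events_accounted_for": 3},
    {"name": "two-cause", "probability": 0.6,
     "causes": ["calibration_drift", "rare_hardware_fault"], "events_accounted_for": 1},
]
best = max(candidates, key=lambda c: score_explanation(c, beliefs))
print(best["name"])   # the simpler, more coherent explanation wins despite lower likelihood
```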
The selection and evaluation of explanations in artificial intelligence has been studied in some detail, going back to early work on abductive reasoning, in which explanations with structural simplicity, coherence, or minimality are preferred (e.g. [156, 97]), and the concept of the explanatory power of a set of hypotheses is defined as the set of manifestations those hypotheses account for [1]. Other approaches use probability as the defining factor to determine the most likely explanation (e.g. [59]). In addition to the cognitive biases that lead people to discount probability, probabilistic approaches have the problem that such fine-grained probabilities are not always available [95]. These selection mechanisms are context-independent and do not account for the explanation being relevant to the question or to the explainee.
Leake [94], on the other hand, argues for goal-directed explanations in abductive reasoning that explicitly aim to reduce knowledge gaps; specifically, to explain why an observed event is 'reasonable' and to help identify faulty reasoning processes that led to it being surprising. He proposes nine evaluation dimensions for explanations: timeliness, knowability, distinctiveness, predictive power, causal force, independence, repairability, blockability, and desirability. Some of these correspond to evaluation criteria outlined in Section 4.5; for example, distinctiveness notes that a cause that is surprising is of good explanatory value, which equates to the criterion of abnormality.
# 5. Social Explanation -- How Do People Communicate Explanations?
Causal explanation is first and foremost a form of social interaction. One speaks of giving causal explanations, but not attributions, perceptions, comprehensions, categorizations, or memories. The verb to explain is a three-place predicate: Someone explains something to someone. Causal explanation takes the form of conversation and is thus subject to the rules of conversation. [Emphasis original] -- Hilton [72]
This final section looks at the communication problem in explanation, something that has been studied little in explainable AI so far. The work outlined in this section asserts that the explanation process does not stop at just selecting an explanation, but considers that an explanation is an interaction between two roles: explainer and explainee (perhaps the same person/agent playing both roles), and that there are certain 'rules' that govern this interaction.
# 5.1. Explanation as Conversation
Hilton [72] presents the most seminal article on the social aspects of conversation, proposing a conversational model of explanation based on foundational work undertaken by both himself and others. The primary argument of Hilton is that explanation is a conversation, and this is how it differs from causal attribution. He argues that there are two stages: the diagnosis of causality, in which the explainer determines why an action/event occurred; and the explanation, which is the social process of conveying this to someone. The problem is then to "resolve a puzzle in the explainee's mind about why the event happened by closing a gap in his or her knowledge" [72, p. 66].
The conversational model argues that good social explanations must be relevant. This means that they must answer the question that is asked: merely identifying causes does not provide good explanations, because many of the causes will not be relevant to the question; or, worse still, if the 'most probable' causes are selected to present to the explainee, they will not be relevant to the question asked. The information that is communicated between explainer and explainee should conform to the general rules of cooperative conversation [56], including being relevant to the explainees themselves and to what they already know.
Hilton [72] terms the second stage explanation presentation, and argues that when an explainer presents an explanation to an explainee, they are engaged in a conversation. As such, they tend to follow basic rules of conversation, which Hilton argues are captured by Grice's maxims of conversation [56]: (a) quality; (b) quantity; (c) relation; and (d) manner. Coarsely, these respectively mean: only say what you believe; only say as much as is necessary; only say what is relevant; and say it in a nice way.
These maxims imply that the shared knowledge between explainer and explainee forms the presuppositions of the explanation, and the other factors are the causes that should be explained; in short, the explainer should not explain any causes they think the explainee already knows (epistemic explanation selection).
Previous sections have presented the relevant literature about causal connection (Sections 3 and 4) and explanation selection (Section 4). In the remainder of this subsection, we describe Grice's model and present related research that analyses how people select explanations relative to subjective (or social) viewpoints, along with work that supports Hilton's conversational model of explanation [72].
# 5.1.1. Logic and Conversation
Grice's maxims [56] (or the Gricean maxims) are a model for how people engage in cooperative conversation. Grice observes that conversational statements do not occur in isolation: they are often linked together, forming a cooperative effort to achieve some goal of information exchange or some social goal, such as social bonding. He notes then that a general principle one should adhere to in conversation is the cooperative principle: "Make your conversational contribution as much as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged" [56, p. 45].
For this, Grice [56] distinguishes four categories of maxims that would help to achieve the cooperative principle:
1. Quality: Make sure that the information is of high quality; try to make your contribution one that is true. This contains two maxims: (a) do not say things that you believe to be false; and (b) do not say things for which you do not have sufficient evidence.
2. Quantity: Provide the right quantity of information. This contains two maxims: (a) make your contribution as informative as is required; and (b) do not make it more informative than is required.
3. Relation: Only provide information that is related to the conversation. This consists of a single maxim: (a) be relevant. This maxim can be interpreted as a strategy for achieving the maxim of quantity.
4. Manner: Relating to how one provides information, rather than what is provided. This consists of the 'supermaxim' of "Be perspicuous", which, according to Grice, is broken into various maxims such as: (a) avoid obscurity of expression; (b) avoid ambiguity; (c) be brief (avoid unnecessary prolixity); and (d) be orderly.
Grice [56] argues that for cooperative conversation, one should obey these maxims, and that people learn such maxims as part of their life experience. He further links these maxims to implicature, and shows that it is possible to violate some maxims while still being cooperative, in order either to avoid violating one of the other maxims, or to achieve some particular goal, such as to implicate something else without saying it. Irony and metaphors are examples of violating the quality maxims, but other examples, such as: Person A: "What did you think of the food they served?"; Person B: "Well, it was certainly healthy", violate the maxim of manner, implying perhaps that Person B did not enjoy the food, without them actually saying so.
Following from the claim that explanations are conversations, Hilton [72] argues that explanations should follow these maxims. The quality and quantity categories present logical characterisations of the explanations themselves, while the relation and manner categories define how the explanations should be given.
# 5.1.2. Relation & Relevance in Explanation Selection
Of particular interest here is research supporting these Gricean maxims; in particular, the related maxims of quantity and relevance, which together state that the speaker should only say what is necessary and relevant. In social explanation, research has shown that people select explanations to adhere to these maxims by considering the particular question being asked by the explainee, but also by giving explanations that the explainee does not already accept as being true. To quote Hesslow:
What are being selected are essentially questions, and the causal selection that follows from this is determined by the straightforward criterion of explanatory relevance. -- [69, p. 30]
In Section 4.4.1, we saw evidence to suggest that the differences between the fact and foil for contrastive why-questions are the relevant causes for explanation. In this section, we review work on the social aspects of explanation selection and evaluation.
Epistemic Relevance. Slugoski et al. [165] present evidence of Gricean maxims in explanation, and of support for the idea of explanation as conversation. They argue that the form of explanation must take into account its function as an answer to a specified why-question, and that this should take place within a conversational framework, including the context of the explainee. They gave experimental participants information in the form of a police report about an individual named George who had been charged with assault after a school fight. This information contained information about George himself, and about the circumstances of the fight. Participants were then paired with another 'participant' (played by a researcher), were told that the other participant had either: (a) information about George; (b) information about the circumstances of the fight; or (c) neither; and were asked to answer why George had assaulted the other person. The results showed that participants provided explanations tailored to their expectations of what the hearer already knows, selecting single causes based on abnormal factors of which they believe the explainee is unaware, and that participants change their explanations of the same event when presenting to explainees with differing background knowledge.
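A sketch of this kind of epistemic selection is shown below; it assumes the explainer keeps a (possibly imperfect) model of what the explainee already knows, and simply prefers abnormal causes that are novel to the hearer.

```python
def select_for_explainee(causes, explainee_knows):
    """Keep causes the explainee is unlikely to know already, preferring abnormal ones.

    `causes` is a list of dicts with a description and an 'abnormal' flag;
    `explainee_knows` is the set of descriptions the hearer is assumed to know.
    """
    novel = [c for c in causes if c["description"] not in explainee_knows]
    novel.sort(key=lambda c: c.get("abnormal", False), reverse=True)
    return novel[:1] or causes[:1]   # say something rather than nothing

causes = [
    {"description": "George has a short temper", "abnormal": True},
    {"description": "the other boy taunted George", "abnormal": True},
    {"description": "school fights involve two people", "abnormal": False},
]
# An explainee who has already read George's personality file:
print(select_for_explainee(causes, {"George has a short temper"}))
```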
Jaspars and Hilton [80] and Hilton [73] both argue that such results demonstrate that, as well as being true or likely, a good explanation must be relevant to both the question and to the mental model of the explainee. Byrne [16] offers a similar argument in her computational model of explanation selection, noting that humans are model-based, not proof-based, so explanations must be relevant to a model.
Halpern and Pearl [59] present an elegant formal model of explanation selection based on epistemic relevance. This model extends their work on structural causal models [59], discussed in Section 2.1.1. They define an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event.
Recall from Section 2.1.1 that structural causal models [58] contain variables and functions between these variables. A situation is a unique assignment from variables to values. Halpern and Pearl [59] then define an epistemic state as a set of situations: one for each situation that the explainee considers possible. Explaining the causes of an event then becomes providing the values for those variables that remove some situations from the epistemic state such that the cause of the event can be uniquely identified. They then further show how to provide explanations that describe the structural model itself, rather than just the values of variables, and how to reason when provided with probability distributions over events. Given a probabilistic model, Halpern and Pearl formally define the explanatory power of partial explanations. Informally, this states that explanation C1 has more explanatory power than explanation C2 for explanandum E if and only if providing C1 to the explainee increases the prior probability of E being true more than providing C2 does.
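To make this concrete, the following is a minimal sketch (in Python) of how an epistemic state and the explanatory power of partial explanations could be represented. The variables, situations, and probabilities are invented for illustration only and are not taken from Halpern and Pearl's formalism.

```python
# A minimal sketch of an epistemic state (a set of situations the explainee
# considers possible, each with a probability) and the explanatory power of a
# partial explanation, measured as the increase in probability of the
# explanandum. All values below are invented for illustration.

def posterior(epistemic_state, explanandum, explanation):
    """P(explanandum | explanation) over the explainee's epistemic state.

    epistemic_state: list of (situation, probability) pairs, where a situation
    is a dict assigning a value to every variable.
    explanation: a partial assignment (dict) of variables to values.
    explanandum: a predicate over situations.
    """
    consistent = [(s, p) for (s, p) in epistemic_state
                  if all(s[v] == val for v, val in explanation.items())]
    total = sum(p for _, p in consistent)
    if total == 0:
        return 0.0
    return sum(p for s, p in consistent if explanandum(s)) / total


# Hypothetical example: did the forest fire (FF) occur, given lightning (L)
# and a dropped match (M)?
epistemic_state = [
    ({"L": 1, "M": 0, "FF": 1}, 0.2),
    ({"L": 0, "M": 1, "FF": 1}, 0.3),
    ({"L": 0, "M": 0, "FF": 0}, 0.3),
    ({"L": 1, "M": 1, "FF": 1}, 0.1),
    ({"L": 1, "M": 0, "FF": 0}, 0.1),
]
explanandum = lambda s: s["FF"] == 1

prior = posterior(epistemic_state, explanandum, {})
c1, c2 = {"M": 1}, {"L": 1}
# C1 has more explanatory power than C2 iff it raises P(FF) above the prior
# by more than C2 does.
gain = lambda c: posterior(epistemic_state, explanandum, c) - prior
print(round(gain(c1), 2), round(gain(c2), 2))
```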
Dodd and Bradshaw [38] demonstrate that the perceived intention of a speaker is important in implicature. Just as leading questions in eyewitness reports can have an effect on the judgement of the eyewitness, so too can they affect explanation. They showed that the meaning and presuppositions that people infer from conversational implicatures depend heavily on the perceived intent or bias of the speaker. In their experiments, they asked participants to assess, among other things, the causes of a vehicle accident, with the account of the accident being given by different parties: a neutral bystander vs. the driver of the vehicle. Their results show that the bystander's information is more trusted, but also that incorrect presuppositions are recalled as 'facts' by the participants if the account was provided by the neutral source, but not the biased source, even if they observed the correct facts to begin with. Dodd and Bradshaw argue that this is because the participants filtered the information relative to their perceived intention of the person providing the account.
The Dilution Effect. Tetlock and Boettger [169] investigated the effect of implicature with respect to the information presented, particularly its relevance, showing that when presented with additional, irrelevant information, people's implicatures are diluted. They performed a series of controlled experiments in which participants were presented with information about an individual David, and were asked to make predictions about David's future; for example, what his grade point average (GPA) would be. There were two control groups and two test groups. In the control groups, people were told David spent either 3 or 31 hours studying each week (which we will call groups C3 and C31), while in the test groups, subjects were also provided with additional irrelevant information about David (groups T3 and T31). The results showed that those in the diluted T3 group predicted a higher GPA than those in the undiluted C3 group, while those in the diluted T31 group predicted a lower GPA than those in the undiluted C31 group. Tetlock and Boettger argued that this is because participants assumed the irrelevant information may have indeed been relevant, but its lack of support for prediction led to less extreme predictions. This study and the studies on which it built demonstrate the importance of relevance in explanation.
In a further study, Tetlock et al. [170] explicitly controlled for conversational maxims by informing one set of participants that the information displayed to them was chosen at random from the history of the individual. Their results showed that the dilution effect disappeared when conversational maxims were deactivated, providing further evidence for the dilution effect.
Together, these bodies of work and those on which they build demonstrate that Grice's maxims are indeed important in explanation for several reasons; notably, they are a good model for how people expect conversation to happen. Further, providing more information than necessary not only increases the cognitive load of the explainee, but also dilutes the effect of the information that is truly important.
# 5.1.3. Argumentation and Explanation
Antaki and Leudar [3] extend Hilton's conversational model [72] from dialogues to arguments. Their research shows that a majority of statements made in explanations are actually argumentative claim-backings; that is, justifying that a particular cause indeed did hold (or was thought to have held) when a statement is made. Thus, explanations are used both to report causes and to back claims, which is an argument rather than just a question-answer model. They extend the conversational model to a wider class of contrast cases. As well as explaining causes, one must be prepared to defend a particular
claim made in a causal explanation. Thus, explanations extend not just to the state of affairs external to the dialogue, but also to the internal attributes of the dialogue itself. An example of the distinction between explanation and argument provided by Antaki and Leudar [3, p. 186] is "The water is hot because the central heating is on". The distinction lies in whether the speaker believes that the hearer believes that the water is hot or not. If the speaker believes that the hearer believes that the water is hot, then the central heating being on offers an explanation: it contrasts with a case in which the water is not hot. If the speaker believes that the hearer does not believe the water is hot, then this is an argument that the water should indeed be hot; particularly if the speaker believes that the hearer believes that the central heating is on. The speaker is thus trying to persuade the hearer that the water is hot. However, the distinction is not always so clear, because explanations can have argumentative functions.
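As a toy illustration of this distinction, the sketch below (Python) chooses between an explanatory and an argumentative move based on an assumed model of the hearer's beliefs; the beliefs and utterances are invented for illustration.

```python
# Toy decision between explanation and argument for the statement
# "The water is hot because the central heating is on", based on what the
# speaker assumes the hearer already believes.
def respond(hearer_believes_water_hot: bool) -> str:
    if hearer_believes_water_hot:
        # The fact is shared, so the statement functions as an explanation:
        # it answers "why is the water hot?"
        return "explain: the central heating is on, which is why the water is hot"
    # The fact is contested, so the same content functions as an argument:
    # it is offered as a reason to believe that the water is hot.
    return "argue: the central heating is on, so the water should be hot"

print(respond(True))
print(respond(False))
```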
# 5.1.4. Linguistic structure
Malle et al. [116] argue that the linguistic structure of explanations plays an important role in interpersonal explanation. They hypothesise that some linguistic devices are used not to change the reason, but to indicate perspective and to manage impressions. They asked experimental participants to select three negative and three positive intentional actions that they did recently that were outside of their normal routine. They then asked participants to explain why they did this, and coded the answers. Their results showed several interesting findings.
Malle et al. [116] argue that the linguistic structure of explanations plays an important role in interpersonal explanation. They hypothesise that some linguistic devices are used not to change the reason, but to indicate perspective and to manage impressions. They asked experimental participants to select three negative and three positive intentional actions that they did recently that were outside of their normal routine. They then asked participants to explain why they did this, and coded the answers. Their results showed several interesting ï¬ndings.
First, explanations for reasons can be provided in two diï¬erent ways: marked or unmarked. An unmarked reason is a direct reason, while a marked reason has a mental state marker attached. For example, to answer the question âWhy did she go back into the houseâ, the explanations âThe key is still in the houseâ and âShe thinks the key is still in the houseâ both give the same reason, but with diï¬erent constructs that are used to give diï¬erent impressions: the second explanation gives an impression that the explainee may not be in agreement with the actor. | 1706.07269#185 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
Second, people use belief markers and desire markers; for example, "She thinks the key is in the house" and "She wants the key to be in her pocket" respectively. In general, dropping first-person markings, that is, a speaker dropping "I/we believe", is common in conversation, and listeners automatically infer that this is a belief of the speaker. For example, "The key is in the house" indicates a belief on behalf of the speaker and is inferred to mean "I believe the key is in the house" [116] (see footnote 5).
However, for the third-person perspective, this is not the case. The unmarked version of explanations, especially belief markers, generally implies some sort of agreement from the explainer: "She went back in because the key is in the house" invites the explainee to infer that the actor and the explainer share the belief that the key is in the house. Whereas "She went back in because she believes the key is in the house" is ambiguous: it does not (necessarily) indicate the belief of the speaker. The reason "She went back in because she mistakenly believes the key is in the house" offers no ambiguity about the speaker's belief.
Malle [112, p. 169, Table 6.3] argues that different markers sit on a scale between being distancing and being embracing. For example, "she mistakenly believes" is more distancing than "she jumped to the conclusion", while "she realises" is embracing. Such constructs aim not to provide different reasons, but merely allow the speaker to form impressions about themselves and the actor.

Footnote 5: Malle [112, Chapter 4] also briefly discusses valuings as markers, such as "She likes", but notes that these are rarely dropped in reasons.
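A minimal sketch (Python) of how an explanation generator could realise the same reason unmarked or with mental-state markers of different stances; the marker phrases are taken from the discussion above, and the rendering rules are invented.

```python
# Render the same reason ("the key is in the house") either unmarked or with
# a belief marker whose stance ranges from embracing to distancing.
MARKERS = {
    "embracing": "she realises",
    "neutral": "she believes",
    "distancing": "she mistakenly believes",
}

def render_reason(reason: str, marked: bool = False, stance: str = "neutral") -> str:
    if not marked:
        return f"She went back in because {reason}."          # unmarked reason
    return f"She went back in because {MARKERS[stance]} {reason}."

print(render_reason("the key is in the house"))
print(render_reason("the key is in the house", marked=True, stance="distancing"))
```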
# 5.2. Explanatory Dialogue
If we accept the model of explanation as conversation, then we may ask whether there are particular dialogue structures for explanation. There has been a collection of such articles, ranging from dialogues for pragmatic explanation [176] to definitions based on transfer of understanding [179]. However, the most relevant for the problem of explanation in AI is a body of work led largely by Walton.
Walton [180] proposed a dialectical theory of explanation, putting forward similar ideas to those of Antaki and Leudar [3], in that some parts of an explanatory dialogue require the explainer to provide backing arguments to claims. In particular, he argues that such an approach is more suited to "everyday" or interpersonal explanation than models based on scientific explanation. He further argues that such models should be combined with ideas of explanation as understanding, meaning that social explanation is about transferring knowledge from explainer to explainee. He proposes a series of conditions on the dialogue and its interactions as to when and how an explainer should transfer knowledge to an explainee.
In a follow-on paper, Walton [182] proposes a formal dialogue model called CE, based on an earlier persuasion dialogue [184], which defines the conditions on how an explanatory dialogue commences, rules for governing the locutions in the dialogue, rules for governing the structure or sequence of the dialogue, success rules, and termination rules.
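As a rough illustration of how such a rule-based dialogue model could be operationalised, the sketch below (Python) frames these rule categories as checks over a dialogue state. The specific locutions and rules are invented placeholders, not Walton's actual rule set.

```python
# A minimal sketch of an explanation dialogue protocol in the spirit of
# Walton's CE model: commencement rules, locution rules, structure rules,
# success rules, and termination rules. The concrete rules are illustrative.
from dataclasses import dataclass, field

LOCUTIONS = {"why", "explain", "clarify", "accept", "reject", "end"}

@dataclass
class DialogueState:
    moves: list = field(default_factory=list)   # (speaker, locution, content)
    open: bool = False
    succeeded: bool = False

def commence(state, explainee_question):
    # Commencement rule: the dialogue opens with a why-question from the explainee.
    state.open = True
    state.moves.append(("explainee", "why", explainee_question))

def legal(state, speaker, locution):
    # Locution rule: only recognised locutions; structure rule: an "explain"
    # move must respond to an outstanding "why", "reject", or "clarify".
    if locution not in LOCUTIONS or not state.open:
        return False
    if locution == "explain":
        return bool(state.moves) and state.moves[-1][1] in {"why", "reject", "clarify"}
    return True

def move(state, speaker, locution, content=None):
    if not legal(state, speaker, locution):
        raise ValueError(f"illegal move: {locution}")
    state.moves.append((speaker, locution, content))
    if locution == "accept":              # success rule: explainee signals understanding
        state.succeeded = True
    if locution in {"accept", "end"}:     # termination rules
        state.open = False

state = DialogueState()
commence(state, "Why did the robot go back into the kitchen?")
move(state, "explainer", "explain", "It believes the keys are still there.")
move(state, "explainee", "accept")
print(state.succeeded, len(state.moves))
```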
Extending this work further [182], Walton [183] describes an improved formal dialogue system for explanation, including a set of speech act rules for practical explanation, consisting of an opening stage, exploration stage, and closing stage. In particular, this paper focuses on the closing stage to answer the question: how do we know that an explanation has "finished"? Scriven [162] argues that to test someone's understanding of a topic, merely asking them to recall facts that have been told to them is insufficient; they should also be able to answer new questions that demonstrate generalisation of, and inference from, what has been learnt: an examination.
To overcome this, Walton proposes the use of examination dialogues [181] as a method for the explainer to determine whether the explainee has correctly understood the explanation; that is, whether the explainee has a real understanding, not merely a perceived (or claimed) understanding. Walton proposes several rules for the closing stage of the examination dialogue, including a rule for terminating due to "practical reasons", which aims to solve the problem of the failure cycle, in which repeated explanations are requested and thus the dialogue does not terminate.
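A minimal sketch of how the closing stage could work in practice: the explainer poses an examination question to test understanding, and a practical cap on repeated explanation requests prevents the failure cycle. The questions, answers, and threshold below are invented for illustration and are not Walton's rules.

```python
# Illustrative closing stage for an explanation dialogue: terminate either
# when the explainee passes a short examination, or when a practical limit
# on re-explanation is reached (avoiding the failure cycle).
MAX_REEXPLANATIONS = 3   # assumed practical limit, not from Walton's paper

def closing_stage(exam_questions, ask_explainee, reexplain):
    """exam_questions: list of (question, check) pairs, where check(answer) -> bool.
    ask_explainee(question) -> answer; reexplain(question) re-presents material."""
    for question, check in exam_questions:
        attempts = 0
        while not check(ask_explainee(question)):
            attempts += 1
            if attempts >= MAX_REEXPLANATIONS:
                return "terminated for practical reasons"
            reexplain(question)
    return "closed: explainee demonstrated understanding"

# Toy usage with canned behaviour for the explainee.
answers = iter(["it was raining", "the sensor reported an obstacle"])
result = closing_stage(
    exam_questions=[("What triggered the emergency stop?",
                     lambda a: "obstacle" in a)],
    ask_explainee=lambda q: next(answers),
    reexplain=lambda q: None,
)
print(result)
```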
Arioua and Croitoru [4] formalise Walton's work on explanation dialogue [183], grounding it in a well-known argumentation framework [147]. In addition, they provide formalisms of commitment stores and understanding stores for maintaining what each party in the dialogue is committed to, and what they already understand. This is necessary to prevent circular arguments. They further define how to shift between different dialogues in order to enable nested explanations, in which an explanation produces a new why-question, but also to shift from an explanation to an argumentation dialogue, which supports nested argument due to a challenge from an explainee, as noted by Antaki and Leudar [3]. The rules define when this dialectical shift can happen, when it can return to the explanation, and what the transfer of states is between these; that is, how the explanation state is updated after a nested argument dialogue.
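The following minimal sketch (Python) illustrates the bookkeeping idea behind commitment and understanding stores, and a dialectical shift into a nested argument dialogue. The store contents and shift conditions are invented for illustration and are not Arioua and Croitoru's formal definitions.

```python
# Illustrative commitment and understanding stores for an explanation
# dialogue, with a simple dialectical shift into a nested argumentation
# dialogue when the explainee challenges a claim.
class ExplanationDialogue:
    def __init__(self):
        self.commitments = {"explainer": set(), "explainee": set()}
        self.understood = set()      # statements the explainee has accepted as understood
        self.mode = "explanation"    # or "argumentation" during a nested shift

    def assert_claim(self, speaker, claim):
        self.commitments[speaker].add(claim)

    def acknowledge(self, claim):
        # The explainee signals understanding; re-explaining it later would be circular.
        self.understood.add(claim)

    def challenge(self, claim):
        # Dialectical shift: a challenged claim opens a nested argument dialogue.
        if claim in self.commitments["explainer"]:
            self.mode = "argumentation"
            return f"argue for: {claim}"
        return "nothing to argue"

    def resolve_argument(self, claim, accepted):
        # Returning from the nested dialogue updates the explanation state.
        self.mode = "explanation"
        if accepted:
            self.acknowledge(claim)

d = ExplanationDialogue()
d.assert_claim("explainer", "the central heating is on")
print(d.challenge("the central heating is on"))   # shift to argumentation
d.resolve_argument("the central heating is on", accepted=True)
print(d.mode, d.understood)
```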
# 5.3. Social Explanation and XAI
This section presents some ideas on how research from social explanation affects researchers and practitioners in XAI.
# 5.3.1. Conversational Model
The conversational model of explanation according to Hilton [72], and its subsequent extension by Antaki and Leudar [3] to consider argumentation, are appealing and useful models for explanation in AI. In particular, they are appealing because of their generality: they can be used to explain human or agent actions, emotions, physical events, algorithmic decisions, etc. They abstract away from the cognitive processes of causal attribution and explanation selection, and therefore do not commit to any particular model of decision making, of how causes are determined, of how explanations are selected, or even to any particular mode of interaction.
One may argue that in digital systems, many explanations would be better done in a visual manner, rather than a conversational manner. However, the models of Hilton [72], Antaki and Leudar [3], and Walton [183] are all independent of language. They define interactions based on questions and answers, but these need not be verbal. Questions could be asked by interacting with a visual object, and answers could similarly be provided in a visual way. While Grice's maxims are about conversation, they apply just as well to other modes of interaction. For instance, a good visual explanation would display only quality explanations that are relevant and relate to the question: these are exactly Grice's maxims.
I argue that, if we are to design and implement agents that can truly explain themselves, in many scenarios the explanation will have to be interactive and adhere to maxims of communication, irrespective of the media used. For example, what should an explanatory agent do if the explainee does not accept a selected explanation?
# 5.3.2. Dialogue
Walton's explanation dialogues [180, 182, 183], which build on well-accepted models from argumentation, are closer to the notion of computational models than those of Hilton [72] or Antaki and Leudar [3]. While Walton also abstracts away from the cognitive processes of causal attribution and explanation selection, his dialogues are more idealised ways of how explanation can occur, and thus make certain assumptions that may be reasonable for a model but, of course, do not account for all possible interactions. However, this is appealing from an explainable AI perspective, because it is clear that the interactions between an explanatory agent and an explainee will need to be scoped to be computationally tractable. Walton's models provide a nice step towards implementing Hilton's conversational model.
Waltonâs explanation dialogues [180, 182, 183], which build on well-accepted mod- els from argumentation, are closer to the notion of computational models than that of Hilton [72] or Antaki and Leudar [3]. While Walton also abstracts away from the cog- nitive processes of causal attribution and explanation selection, his dialogues are more idealised ways of how explanation can occur, and thus make certain assumptions that may be reasonable for a model, but of course, do not account for all possible interactions. However, this is appealing from an explainable AI perspective because it is clear that the interactions between an explanatory agent and an explainee will need to be scoped to be computationally tractable. Waltonâs models provide a nice step towards implementing Hiltonâs conversational model.
Arioua and Croitoruâs formal model for explanation [4] not only brings us one step closer to a computational model, but also nicely brings together the models of Hilton [72] and Antaki and Leudar [3] for allowing arguments over claims in explanations. Such formal models of explanation could work together with concepts such as conversation policies [55] to implement explanations.
56 | 1706.07269#193 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 194 | 56
The idea of interactive dialogue XAI is not new. In particular, a body of work by Cawsey [17, 18, 19] describes EDGE: a system that generates natural-language dialogues for explaining complex principles. Cawseyâs work was novel because it was the ï¬rst to investigate discourse within an explanation, rather than discourse more generally. Due to the complexity of explanation, Cawsey advocates context-speciï¬c, incremental explanation, interleaving planning and execution of an explanation dialogue. EDGE separates content planning (what to say) from dialogue planning (organisation of the Interruptions attract their own sub-dialog. The ï¬ow of the dialogue is interaction). context dependent, in which context is given by: (1) the current state of the discourse relative to the goal/sub-goal hierarchy; (2) the current focus of the explanation, such as which components of a device are currently under discussion; and (3) assumptions about the userâs knowledge. Both the content and dialogue are inï¬uenced by the context. The dialogue is planned using a rule-based system that break explanatory goals into sub-goals and utterances. Evaluation of EDGE [19] is anecdotal, based on a small set of people, and with no formal evaluation or comparison. | 1706.07269#194 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
At a similar time, Moore and Paris [134] devised a system for explanatory text generation within dialogues that also considers context. They explicitly reject the notion that schemata can be used to generate explanations, because they are too rigid and lack the intentional structure to recover from failures or misunderstandings in the dialogue. Like Cawsey's EDGE system, Moore and Paris explicitly represent the user's knowledge, and plan dialogues incrementally. The two primary differences from EDGE are that Moore and Paris's system explicitly models the effects that utterances can have on the hearer's mental state, providing flexibility that allows recovery from failure and misunderstanding; and that the EDGE system follows an extended explanatory plan, including probing questions, which are deemed less appropriate in Moore and Paris's application area of advisory dialogues. The focus of Cawsey's and Moore and Paris's work is on applications such as intelligent tutoring, rather than on AI that explains itself, but many of the lessons and ideas generalise.
EDGE and other related research on interactive explanation consider only verbal dialogue. As noted above, abstract models of dialogue such as those proposed by Walton [183] may serve as a good starting point for multi-modal interactive explanations.
# 5.3.3. Theory of Mind | 1706.07269#195 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 196 | # 5.3.3. Theory of Mind
is required to provide meaningful explanations. However, for social explanation, a Theory of Mind is also required. Clearly, as part of a dialog, an explanatory agent should at least keep track of what has already been explained, which is a simple model of other and forms part of the explanatory context. However, if an intelligent agent is operating with a human explainee in a particular environment, it could may have access to more complete models of other, such as the otherâs capabilities and their current beliefs or knowledge; and even the explaineeâs model of the explanatory agent itself. If it has such a model, the explanatory agent can exploit this by tailoring the explanation to the human observer. Halpern and Pearl [59] already considers a simpliï¬ed idea of this in their model of explanation, but other work on epistemic reasoning and planning [42, 135] and planning for interactive dialogue [143] can play a part here. These techniques will be made more powerful if they are aligned with user modelling techniques used in HCI [44].
While the idea of Theory of Mind in AI is not new (see for example [178, 37]), its application to explanation has not been adequately explored. Early work on XAI took the idea of dialogue and user modelling seriously. For example, Cawsey's EDGE system, described in Section 5.3.2, contains a specific user model to provide better context for interactive explanations [20]. Cawsey argues that the user model must be integrated closely with the explanation model to provide more natural dialogue. The EDGE user model consists of two parts: (1) the knowledge that the user has about a phenomenon; and (2) their "level of expertise"; both of which can be updated during the dialogue. EDGE uses dialogue questions to build a user model, either explicitly, using questions such as "Do you know X?" or "What is the value of Y?", or implicitly, such as when a user asks for clarification. EDGE tries to guess other indirect knowledge using logical inference from this direct knowledge. This knowledge is then used to tailor the explanation to the specific person, which is an example of using epistemic relevance to select explanations. Cawsey was not the first to consider user knowledge; for example, Weiner's BLAH system [185] for incremental explanation also had a simple user model for knowledge that is used to tailor explanation, and Weiner refers to Grice's maxim of quality to justify this.
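The sketch below (Python) illustrates the kind of user model described here: a store of what the user is assumed to know plus an expertise level, updated from dialogue moves and used to filter an explanation by epistemic relevance. The update rules and example causal chain are invented for illustration and are not EDGE's actual model.

```python
# Illustrative user model in the spirit of EDGE: track assumed user knowledge
# and expertise, update it from the dialogue, and use it to select which parts
# of a causal chain are worth stating.
class UserModel:
    def __init__(self, expertise="novice"):
        self.known = set()
        self.expertise = expertise

    def record_yes(self, fact):            # explicit: "Do you know X?" -> yes
        self.known.add(fact)

    def record_clarification(self, fact):  # implicit: a clarification request
        self.known.discard(fact)           # suggests the fact is not yet known

    def relevant(self, causal_chain):
        # Epistemic relevance: only explain steps the user is not assumed to
        # know; experts also skip steps marked as basic.
        return [step for step, basic in causal_chain
                if step not in self.known
                and not (basic and self.expertise == "expert")]

# Invented causal chain: (step, is_basic_knowledge)
chain = [("the loan application had a low credit score", False),
         ("credit score is computed from repayment history", True),
         ("low scores fall below the approval threshold", False)]

user = UserModel(expertise="novice")
user.record_yes("credit score is computed from repayment history")
print(user.relevant(chain))   # two steps remain to be explained
```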
More recently, Chakraborti et al. [21] discuss preliminary work in this area for explaining plans. Their problem definition consists of two planning models: the explainer's and the explainee's; and the task is to align the two models by minimising some criterion, for example, the number of changes. This is an example of using epistemic relevance to tailor an explanation. Chakraborti et al. class this as contrastive explanation, because the explanation contrasts two models. However, this is not the same use of the term "contrastive" as used in the social science literature (see Section 2.3), in which the contrast is an explicit foil provided by the explainee as part of a question.
# 5.3.4. Implicature
It is clear that in some settings, implicature can play an important role. Reasoning about the implications of what the explainee says could support more succinct explanations, but just as importantly, those designing explanatory agents must also keep in mind what people could infer from the literal explanations, both correctly and incorrectly.
Further to this, as noted by Dodd and Bradshaw [38], people interpret explanations relative to the intent of the explainer. This is important for explainable AI because one of the main goals of explanation is to establish trust of people, and as such, explainees will be aware of this goal. It is clear that we should quite often assume from the outset that trust levels are low. If explainees are sceptical of the decisions made by a system, it is not difficult to imagine that they will also be sceptical of explanations provided, and could interpret explanations as biased.
# 5.3.5. Dilution
Finally, it is important to focus on dilution. As noted in the introduction of this paper, much of the work in explainable AI is focused on causal attributions. The work outlined in Section 4 shows that this is only part of the problem. While presenting a causal chain may allow an explainee to fill in the gaps of their own knowledge, there is a real risk that the less relevant parts of the chain will dilute those parts that are crucial to the particular question asked by the explainee. This again emphasises the importance of explanation selection and relevance.
# 5.3.6. Social and Interactive Explanation
The recent surge in explainable AI has not (yet) truly adopted the concept of socially-interactive explanation, at least relative to the first wave of explainable AI systems such as those by Cawsey [20] and Moore and Paris [134]. I hypothesise that this is largely due to the nature of the task being explained. Most recent research is concerned with explainable machine learning, whereas early work explained symbolic models such as expert systems and logic programs. This influences the research in two ways: (1) recent research focuses on how to abstract and simplify uninterpretable models such as neural nets, whereas symbolic approaches are relatively more interpretable and need less abstraction in general; and (2) an interactive explanation is a goal-based endeavour, which lends itself more naturally to symbolic approaches. Given that early work on XAI explained symbolic approaches, the authors of such work would have seen the link to interaction more intuitively. Despite this, others in the AI community have recently re-discovered the importance of social interaction for explanation (for example, [186, 163]), and have noted that this is a problem that requires collaboration with HCI researchers.
# 6. Conclusions
In this paper, I have argued that explainable AI can benefit from existing models of how people define, generate, select, present, and evaluate explanations. I have reviewed what I believe are some of the most relevant and important findings from social science research on human explanation, and have provided some insight into how this work can be used in explainable AI.
In particular, we should take the four major findings noted in the introduction into account in our explainable AI models: (1) why-questions are contrastive; (2) explanations are selected (in a biased manner); (3) explanations are social; and (4) probabilities are not as important as causal links. I acknowledge that incorporating these ideas is not feasible for all applications, but in many cases they have the potential to improve explanatory agents. I hope and expect that readers will also find other useful ideas in this survey.

It is clear that adopting this work into explainable AI is not a straightforward step. From a social science viewpoint, these models will need to be refined and extended to provide good explanatory agents, which requires researchers in explainable AI to work closely with researchers from philosophy, psychology, cognitive science, and human-computer interaction. Already, projects of this type are underway, with impressive results; for example, see [91, 89, 157].
# Acknowledgements
The author would like to thank Denis Hilton for his review of an earlier draft of this paper, pointers to several pieces of related work, and his many insightful discussions on the link between explanation in the social sciences and artificial intelligence. The author would also like to thank several others for critical input on an earlier draft: Natasha Goss, Michael Winikoff, Gary Klein, Robert Hoffman, and the anonymous reviewers; and Darryn Reid for his discussions on the link between self, trust, and explanation.
This work was undertaken while the author was on sabbatical at the Université de Toulouse Capitole, and was partially funded by Australian Research Council grant DP160104083 Catering for individuals' emotions in technology development, and a Sponsored Research Collaboration grant from the Commonwealth of Australia Defence Science and Technology Group and the Defence Science Institute, an initiative of the State Government of Victoria.
# References
[1] D. Allemang, M. C. Tanner, T. Bylander, J. R. Josephson, Computational Complexity of Hypoth- esis Assembly, in: IJCAI, vol. 87, 1112â1117, 1987. | 1706.07269#203 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 204 | [2] J. Angwin, J. Larson, S. Mattu, L. Kirchner, Machine bias, ProPublica, May 23. [3] C. Antaki, I. Leudar, Explaining in conversation: Towards an argument model, European Journal
of Social Psychology 22 (2) (1992) 181â194.
[4] A. Arioua, M. Croitoru, Formalizing explanatory dialogues, in: International Conference on Scal- able Uncertainty Management, Springer, 282â297, 2015.
[5] J. L. Aronson, On the grammar of âcauseâ, Synthese 22 (3) (1971) 414â430. [6] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, K.-R. M ËAËzller, How to explain individual classiï¬cation decisions, Journal of Machine Learning Research 11 (Jun) (2010) 1803â1831.
[7] E. Bekele, W. E. Lawson, Z. Horne, S. Khemlani, Human-level explanatory biases for person re-identiï¬cation . | 1706.07269#204 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 205 | [8] P. Besnard, A. Hunter, Elements of argumentation, vol. 47, MIT press Cambridge, 2008. [9] O. Biran, C. Cotton, Explanation and justiï¬cation in machine learning: A survey, in: IJCAI 2017
Workshop on Explainable Artiï¬cial Intelligence (XAI), 8â13, 2017.
[10] A. Boonzaier, J. McClure, R. M. Sutton, Distinguishing the eï¬ects of beliefs and preconditions: The folk psychology of goals and actions, European Journal of Social Psychology 35 (6) (2005) 725â740.
[11] R. I. Brafman, C. Domshlak, From One to Many: Planning for Loosely Coupled Multi-Agent Systems., in: International Conference on Automated Planning and Scheduling, 28â35, 2008. [12] J. Broekens, M. Harbers, K. Hindriks, K. Van Den Bosch, C. Jonker, J.-J. Meyer, Do you get it? User-evaluated explainable BDI agents, in: German Conference on Multiagent System Technolo- gies, Springer, 28â39, 2010. | 1706.07269#205 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 206 | [13] S. Bromberger, Whyâquestions, in: R. G. Colodny (Ed.), Mind and Cosmos: Essays in Contem- porary Science and Philosophy, Pittsburgh University Press, Pittsburgh, 68â111, 1966.
[14] B. Buchanan, E. Shortliï¬e, Rule-based expert systems: the MYCIN experiments of the Stanford Heuristic Programming Project, Addison-Wesley, 1984.
[15] A. Burguet, D. Hilton, Eï¬ets de contexte sur lâexplication causale, in: M. B. et A. Trognon (Ed.), Psychologie Sociale et Communication, Paris: Dunod, 219â228, 2004.
[16] R. M. Byrne, The Construction of Explanations, in: AI and Cognitive Scienceâ90, Springer, 337â 351, 1991.
[17] A. Cawsey, Generating Interactive Explanations., in: AAAI, 86â91, 1991. [18] A. Cawsey, Explanation and interaction: the computer generation of explanatory dialogues, MIT
press, 1992.
[19] A. Cawsey, Planning interactive explanations, International Journal of Man-Machine Studies 38 (2) (1993) 169â199. | 1706.07269#206 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 207 | press, 1992.
[19] A. Cawsey, Planning interactive explanations, International Journal of Man-Machine Studies 38 (2) (1993) 169â199.
[20] A. Cawsey, User modelling in interactive explanations, User Modeling and User-Adapted Interac- tion 3 (1993) 221â247.
[21] T. Chakraborti, S. Sreedharan, Y. Zhang, S. Kambhampati, Plan explanations as model rec- onciliation: Moving beyond explanation as soliloquy, in: Proceedings of IJCAI, URL https: //www.ijcai.org/proceedings/2017/0023.pdf, 2017.
[22] K. Chan, T.-W. Lee, P. A. Sample, M. H. Goldbaum, R. N. Weinreb, T. J. Sejnowski, Compar- ison of machine learning and traditional classiï¬ers in glaucoma diagnosis, IEEE Transactions on Biomedical Engineering 49 (9) (2002) 963â974.
[23] B. Chandrasekaran, M. C. Tanner, J. R. Josephson, Explaining control strategies in problem solving, IEEE Expert 4 (1) (1989) 9â15. | 1706.07269#207 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 208 | [24] E. Charniak, R. Goldman, A probabilistic model of plan recognition, in: Proceedings of the ninth National conference on Artiï¬cial intelligence-Volume 1, AAAI Press, 160â165, 1991.
60
[25] J. Y. Chen, K. Procci, M. Boyce, J. Wright, A. Garcia, M. Barnes, Situation awareness-based agent transparency, Tech. Rep. ARL-TR-6905, U.S. Army Research Laboratory, 2014.
[26] Y. Chevaleyre, U. Endriss, J. Lang, N. Maudet, A short introduction to computational social International Conference on Current Trends in Theory and Practice of Computer choice, in: Science, Springer, 51â69, 2007.
[27] S. Chin-Parker, A. Bradner, Background shifts aï¬ect explanatory style: how a pragmatic theory of explanation accounts for background eï¬ects in the generation of explanations, Cognitive Processing 11 (3) (2010) 227â249.
[28] S. Chin-Parker, J. Cantelon, Contrastive Constraints Guide Explanation-Based Category Learning, Cognitive science 41 (6) (2017) 1645â1655. | 1706.07269#208 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 209 | [29] H. Chockler, J. Y. Halpern, Responsibility and blame: A structural-model approach, Journal of Artiï¬cial Intelligence Research 22 (2004) 93â115.
[30] A. Cimpian, E. Salomon, The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism, Behavioral and Brain Sciences 37 (5) (2014) 461â480.
[31] A. Cooper, The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity, Sams Indianapolis, IN, USA, 2004.
[32] DARPA, Explainable Artiï¬cial Intelligence (XAI) Program, http://www.darpa.mil/program/ explainable-artificial-intelligence, full solicitation at http://www.darpa.mil/attachments/ DARPA-BAA-16-53.pdf, 2016.
# ristie
[33] G. C. Davey, Characteristics of individuals with fear of spiders, Anxiety Research 4 (4) (1991) 299â314. | 1706.07269#209 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 210 | # ristie
[33] G. C. Davey, Characteristics of individuals with fear of spiders, Anxiety Research 4 (4) (1991) 299â314.
[34] M. M. de Graaf, B. F. Malle, How People Explain Action (and Autonomous Intelligent Systems Should Too), in: AAAI Fall Symposium on Artiï¬cial Intelligence for Human-Robot Interaction, 2017.
[35] D. C. Dennett, The intentional stance, MIT press, 1989. [36] D. C. Dennett, From bacteria to Bach and back: The evolution of minds, WW Norton & Company,
2017.
[37] F. Dignum, R. Prada, G. J. Hofstede, From autistic to social agents, in: Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, IFAAMAS, 1161â1164, 2014.
[38] D. H. Dodd, J. M. Bradshaw, Leading questions and memory: Pragmatic constraints, Journal of Memory and Language 19 (6) (1980) 695.
[39] P. Dowe, Wesley Salmonâs process theory of causality and the conserved quantity theory, Philos- ophy of Science 59 (2) (1992) 195â216. | 1706.07269#210 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 211 | [40] T. Eiter, T. Lukasiewicz, Complexity results for structure-based causality, Artiï¬cial Intelligence 142 (1) (2002) 53â89.
[41] T. Eiter, T. Lukasiewicz, Causes and explanations in the structural-model approach: Tractable cases, Artiï¬cial Intelligence 170 (6-7) (2006) 542â580.
[42] R. Fagin, J. Halpern, Y. Moses, M. Vardi, Reasoning about knowledge, vol. 4, MIT press Cam- bridge, 1995.
[43] D. Fair, Causation and the Flow of Energy, Erkenntnis 14 (3) (1979) 219â250. [44] G. Fischer, User modeling in humanâcomputer interaction, User modeling and user-adapted interaction 11 (1-2) (2001) 65â86.
[45] J. Fox, D. Glasspool, D. Grecu, S. Modgil, M. South, V. Patkar, Argumentation-based inference and decision makingâA medical perspective, IEEE intelligent systems 22 (6). | 1706.07269#211 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 212 | [46] M. Fox, D. Long, D. Magazzeni, Explainable Planning, in: IJCAI 2017 Workshop on Explainable Artiï¬cial Intelligence (XAI), URL https://arxiv.org/pdf/1709.10256, 2017.
[47] N. Frosst, G. Hinton, Distilling a Neural Network Into a Soft Decision Tree, arXiv e-prints 1711.09784, URL https://arxiv.org/abs/1711.09784.
[48] T. Gerstenberg, D. A. Lagnado, Spreading the blame: The allocation of responsibility amongst multiple agents, Cognition 115 (1) (2010) 166â171.
# Peterson
[49] T. Gerstenberg, M. F. Peterson, N. D. Goodman, D. A. Lagnado, J. B. Tenenbaum, Eye-tracking causality, Psychological science 28 (12) (2017) 1731â1744. | 1706.07269#212 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 213 | [50] M. Ghallab, D. Nau, P. Traverso, Automated Planning: theory and practice, Elsevier, 2004. [51] D. T. Gilbert, P. S. Malone, The correspondence bias, Psychological bulletin 117 (1) (1995) 21. [52] C. Ginet, In defense of a non-causal account of reasons explanations, The Journal of Ethics 12 (3-4)
(2008) 229â237.
[53] L. Giordano, C. Schwind, Conditional logic of actions and causation, Artiï¬cial Intelligence 157 (1- 61
2) (2004) 239â279.
[54] V. Girotto, P. Legrenzi, A. Rizzo, Event controllability in counterfactual thinking, Acta Psycho- logica 78 (1) (1991) 111â133.
[55] M. Greaves, H. Holmback, J. Bradshaw, What is a conversation policy?, in: Issues in Agent Communication, Springer, 118â131, 2000.
[56] H. P. Grice, Logic and conversation, in: Syntax and semantics 3: Speech arts, New York: Academic Press, 41â58, 1975. | 1706.07269#213 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 214 | [56] H. P. Grice, Logic and conversation, in: Syntax and semantics 3: Speech arts, New York: Academic Press, 41â58, 1975.
[57] J. Y. Halpern, Axiomatizing causal reasoning, Journal of Artiï¬cial Intelligence Research 12 (2000) 317â337.
[58] J. Y. Halpern, J. Pearl, Causes and explanations: A structural-model approach. Part I: Causes, The British Journal for the Philosophy of Science 56 (4) (2005) 843â887.
[59] J. Y. Halpern, J. Pearl, Causes and explanations: A structural-model approach. Part II: Explana- tions, The British Journal for the Philosophy of Science 56 (4) (2005) 889â911.
[60] R. J. Hankinson, Cause and explanation in ancient Greek thought, Oxford University Press, 2001. [61] N. R. Hanson, Patterns of discovery: An inquiry into the conceptual foundations of science, CUP
Archive, 1965.
[62] G. H. Harman, The inference to the best explanation, The philosophical review 74 (1) (1965) 88â95. | 1706.07269#214 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 215 | Archive, 1965.
[62] G. H. Harman, The inference to the best explanation, The philosophical review 74 (1) (1965) 88â95.
[63] M. Harradon, J. Druce, B. Ruttenberg, Causal Learning and Explanation of Deep Neural Net- works via Autoencoded Activations, arXiv e-prints 1802.00541, URL https://arxiv.org/abs/ 1802.00541.
[64] H. L. A. Hart, T. Honor´e, Causation in the Law, OUP Oxford, 1985. [65] B. Hayes, J. A. Shah, Improving Robot Controller Transparency Through Autonomous Policy Explanation, in: Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), 2017.
[66] F. Heider, The psychology of interpersonal relations, New York: Wiley, 1958. [67] F. Heider, M. Simmel, An experimental study of apparent behavior, The American Journal of
Psychology 57 (2) (1944) 243â259.
[68] C. G. Hempel, P. Oppenheim, Studies in the Logic of Explanation, Philosophy of Science 15 (2) (1948) 135â175. | 1706.07269#215 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 216 | [68] C. G. Hempel, P. Oppenheim, Studies in the Logic of Explanation, Philosophy of Science 15 (2) (1948) 135â175.
[69] G. Hesslow, The problem of causal selection, Contemporary science and natural explanation: Commonsense conceptions of causality (1988) 11â32.
[70] D. Hilton, Social Attribution and Explanation, in: Oxford Handbook of Causal Reasoning, Oxford University Press, 645â676, 2017.
[71] D. J. Hilton, Logic and causal attribution, in: Contemporary science and natural explanation: Commonsense conceptions of causality, New York University Press, 33â65, 1988.
[72] D. J. Hilton, Conversational processes and causal explanation, Psychological Bulletin 107 (1) (1990) 65â81.
[73] D. J. Hilton, Mental models and causal explanation: Judgements of probable cause and explanatory relevance, Thinking & Reasoning 2 (4) (1996) 273â308.
[74] D. J. Hilton, J. McClure, B. Slugoski, Counterfactuals, conditionals and causality: A social psy- chological perspective, in: D. R. Mande, D. J. Hilton, P. Catellani (Eds.), The psychology of counterfactual thinking, London: Routledge, 44â60, 2005. | 1706.07269#216 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 217 | [75] D. J. Hilton, J. McClure, R. M. Sutton, Selecting explanations from causal chains: Do statistical principles explain preferences for voluntary causes?, European Journal of Social Psychology 40 (3) (2010) 383â400.
[76] D. J. Hilton, J. L. McClure, R. Slugoski, Ben, The Course of Events: Counterfactuals, Causal Sequences and Explanation, in: D. R. Mandel, D. J. Hilton, P. Catellani (Eds.), The Psychology of Counterfactual Thinking, Routledge, 2005.
[77] D. J. Hilton, B. R. Slugoski, Knowledge-based causal attribution: The abnormal conditions focus model, Psychological review 93 (1) (1986) 75.
[78] R. R. Hoï¬man, G. Klein, Explaining explanation, part 1: theoretical foundations, IEEE Intelligent Systems 32 (3) (2017) 68â73.
[79] D. Hume, An enquiry concerning human understanding: A critical edition, vol. 3, Oxford Univer- sity Press, 2000.
[80] J. M. Jaspars, D. J. Hilton, Mental models of causal reasoning, in: The social psychology of knowledge, Cambridge University Press, 335â358, 1988. | 1706.07269#217 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 218 | [81] J. R. Josephson, S. G. Josephson, Abductive inference: Computation, philosophy, technology, Cambridge University Press, 1996.
62
[82] D. Kahneman, Thinking, fast and slow, Macmillan, 2011. [83] D. Kahneman, A. Tversky, The simulation heuristic, in: P. S. D. Kahneman, A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases, New York: Cambridge University Press, 1982.
[84] Y. Kashima, A. McKintyre, P. Cliï¬ord, The category of the mind: Folk psychology of belief, desire, and intention, Asian Journal of Social Psychology 1 (3) (1998) 289â313.
[85] A. Kass, D. Leake, Types of Explanations, Tech. Rep. ADA183253, DTIC Document, 1987. [86] H. H. Kelley, Attribution theory in social psychology, in: Nebraska symposium on motivation,
University of Nebraska Press, 192â238, 1967.
[87] H. H. Kelley, Causal schemata and the attribution process, General Learning Press, Morristown, NJ, 1972. | 1706.07269#218 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 219 | [87] H. H. Kelley, Causal schemata and the attribution process, General Learning Press, Morristown, NJ, 1972.
[88] J. Knobe, Intentional action and side eï¬ects in ordinary language, Analysis 63 (279) (2003) 190â 194.
[89] T. Kulesza, M. Burnett, W.-K. Wong, S. Stumpf, Principles of explanatory debugging to per- sonalize interactive machine learning, in: Proceedings of the 20th International Conference on Intelligent User Interfaces, ACM, 126â137, 2015.
[90] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? Ways explanations impact end usersâ mental models, in: Visual Languages and Human- Centric Computing (VL/HCC), 2013 IEEE Symposium on, IEEE, 3â10, 2013.
[91] T. Kulesza, S. Stumpf, W.-K. Wong, M. M. Burnett, S. Perona, A. Ko, I. Oberst, Why-oriented end-user debugging of naive Bayes text classiï¬cation, ACM Transactions on Interactive Intelligent Systems (TiiS) 1 (1) (2011) 2. | 1706.07269#219 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 220 | [92] D. A. Lagnado, S. Channon, Judgments of cause and blame: The eï¬ects of intentionality and foreseeability, Cognition 108 (3) (2008) 754â770.
[93] P. Langley, B. Meadows, M. Sridharan, D. Choi, Explainable Agency for Intelligent Autonomous Systems, in: Proceedings of the Twenty-Ninth Annual Conference on Innovative Applications of Artiï¬cial Intelligence, AAAI Press, 2017.
[94] D. B. Leake, Goal-Based Explanation Evaluation, Cognitive Science 15 (4) (1991) 509â545. [95] D. B. Leake, Abduction, experience, and goals: A model of everyday abductive explanation,
Journal of Experimental & Theoretical Artiï¬cial Intelligence 7 (4) (1995) 407â428.
[96] J. Leddo, R. P. Abelson, P. H. Gross, Conjunctive explanations: When two reasons are better than one, Journal of Personality and Social Psychology 47 (5) (1984) 933. | 1706.07269#220 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 221 | [97] H. J. Levesque, A knowledge-level account of abduction, in: IJCAI, 1061â1067, 1989. [98] D. Lewis, Causation, The Journal of Philosophy 70 (17) (1974) 556â567. [99] D. Lewis, Causal explanation, Philosophical Papers 2 (1986) 214â240.
[100] B. Y. Lim, A. K. Dey, Assessing demand for intelligibility in context-aware applications, in: Pro- ceedings of the 11th international conference on Ubiquitous computing, ACM, 195â204, 2009.
[101] M. P. Linegang, H. A. Stoner, M. J. Patterson, B. D. Seppelt, J. D. Hoï¬man, Z. B. Crittendon, J. D. Lee, Human-automation collaboration in dynamic mission planning: A challenge requiring an ecological approach, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50 (23) (2006) 2482â2486. | 1706.07269#221 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 222 | [102] P. Lipton, Contrastive explanation, Royal Institute of Philosophy Supplement 27 (1990) 247â266. [103] Z. C. Lipton, The mythos of model interpretability, arXiv preprint arXiv:1606.03490 . [104] T. Lombrozo, The structure and function of explanations, Trends in Cognitive Sciences 10 (10)
(2006) 464â470.
[105] T. Lombrozo, Simplicity and probability in causal explanation, Cognitive psychology 55 (3) (2007) 232â257.
[106] T. Lombrozo, Explanation and categorization: How âwhy?â informs âwhat?â, Cognition 110 (2) (2009) 248â253.
[107] T. Lombrozo, Causalâexplanatory pluralism: How intentions, functions, and mechanisms inï¬uence causal ascriptions, Cognitive Psychology 61 (4) (2010) 303â332.
[108] T. Lombrozo, Explanation and abductive inference, Oxford handbook of thinking and reasoning (2012) 260â276.
[109] T. Lombrozo, N. Z. Gwynne, Explanation and inference: mechanistic and functional explanations guide property generalization, Frontiers in human neuroscience 8 (2014) 700. | 1706.07269#222 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 223 | [110] J. L. Mackie, The cement of the universe, Oxford, 1980. [111] B. F. Malle, How people explain behavior: A new theoretical framework, Personality and Social
Psychology Review 3 (1) (1999) 23â48.
[112] B. F. Malle, How the mind explains behavior: Folk explanations, meaning, and social interaction, 63
MIT Press, 2004.
[113] B. F. Malle, Attribution theories: How people make sense of behavior, Theories in Social Psychol- ogy (2011) 72â95.
[114] B. F. Malle, Time to Give Up the Dogmas of Attribution: An Alternative Theory of Behavior Explanation, Advances in Experimental Social Psychology 44 (1) (2011) 297â311.
[115] B. F. Malle, J. Knobe, The folk concept of intentionality, Journal of Experimental Social Psychol- ogy 33 (2) (1997) 101â121.
[116] B. F. Malle, J. Knobe, M. J. OâLaughlin, G. E. Pearce, S. E. Nelson, Conceptual structure and social functions of behavior explanations: Beyond personâsituation attributions, Journal of Per- sonality and Social Psychology 79 (3) (2000) 309. | 1706.07269#223 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
[117] B. F. Malle, J. M. Knobe, S. E. Nelson, Actor-observer asymmetries in explanations of behavior: New answers to an old question, Journal of Personality and Social Psychology 93 (4) (2007) 491.
[118] B. F. Malle, G. E. Pearce, Attention to behavioral events during interaction: Two actor-observer gaps and three attempts to close them, Journal of Personality and Social Psychology 81 (2) (2001) 278–294.
[119] D. Marr, Vision: A computational investigation into the human representation and processing of visual information, Inc., New York, NY, 1982.
[120] D. Marr, T. Poggio, From understanding computation to understanding neural circuitry, AI Memos AIM-357, MIT, 1976.
[121] R. McCloy, R. M. Byrne, Counterfactual thinking about controllable events, Memory & Cognition 28 (6) (2000) 1071–1078.
[122] J. McClure, Goal-based explanations of actions and outcomes, European Review of Social Psychology 12 (1) (2002) 201–235.
[123] J. McClure, D. Hilton, For you can’t always get what you want: When preconditions are better explanations than goals, British Journal of Social Psychology 36 (2) (1997) 223–240.
[124] J. McClure, D. Hilton, J. Cowan, L. Ishida, M. Wilson, When rich or poor people buy expensive objects: Is the question how or why?, Journal of Language and Social Psychology 20 (2001) 229–257.
[125] J. McClure, D. J. Hilton, Are goals or preconditions better explanations? It depends on the question, European Journal of Social Psychology 28 (6) (1998) 897–911.
[126] J. L. McClure, R. M. Sutton, D. J. Hilton, The Role of Goal-Based Explanations, in: Social judgments: Implicit and explicit processes, vol. 5, Cambridge University Press, 306, 2003.
[127] A. L. McGill, J. G. Klein, Contrastive and counterfactual reasoning in causal judgment, Journal of Personality and Social Psychology 64 (6) (1993) 897.
[128] P. Menzies, H. Price, Causation as a secondary quality, The British Journal for the Philosophy of Science 44 (2) (1993) 187–203.
[129] J. E. Mercado, M. A. Rupp, J. Y. Chen, M. J. Barnes, D. Barber, K. Procci, Intelligent agent transparency in human–agent teaming for Multi-UxV management, Human Factors 58 (3) (2016) 401–415.
[130] J. S. Mill, A system of logic: The collected works of John Stuart Mill, vol. III, 1973.
[131] D. T. Miller, S. Gunasegaram, Temporal order and the perceived mutability of events: Implications for blame assignment, Journal of Personality and Social Psychology 59 (6) (1990) 1111.
[132] T. Miller, P. Howe, L. Sonenberg, Explainable AI: Beware of Inmates Running the Asylum, in: IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), 36–42, URL http://people.eng.unimelb.edu.au/tmiller/pubs/explanation-inmates.pdf, 2017.
[133] T. M. Mitchell, R. M. Keller, S. T. Kedar-Cabelli, Explanation-based generalization: A unifying view, Machine Learning 1 (1) (1986) 47–80.
[134] J. D. Moore, C. L. Paris, Planning text for advisory dialogues: Capturing intentional and rhetorical information, Computational Linguistics 19 (4) (1993) 651–694.
[135] C. Muise, V. Belle, P. Felli, S. McIlraith, T. Miller, A. R. Pearce, L. Sonenberg, Planning Over Multi-Agent Epistemic States: A Classical Planning Approach, in: B. Bonet, S. Koenig (Eds.), Proceedings of AAAI 2015, 1–8, 2015.
[136] G. Nott, ‘Explainable Artificial Intelligence’: Cracking open the black box of AI, Computer World, https://www.computerworld.com.au/article/617359/.
[137] M. J. O’Laughlin, B. F. Malle, How people explain actions performed by groups and individuals, Journal of Personality and Social Psychology 82 (1) (2002) 33.
[138] …, in: D. B. L. Thomas Roth-Berghofer, Nava Tintarev (Ed.), Proceedings of the 6th International Explanation-Aware Computing (ExaCt) workshop, 41–50, 2011.
[139] J. A. Overton, Explanation in Science, Ph.D. thesis, The University of Western Ontario, 2012.
[140] J. A. Overton, ‘Explain’ in scientific discourse, Synthese 190 (8) (2013) 1383–1405.
[141] J. Pearl, D. Mackenzie, The Book of Why: The New Science of Cause and Effect, Hachette UK, 2018.
[142] C. S. Peirce, Harvard lectures on pragmatism, Collected Papers v. 5, 1903.
[143] R. Petrick, M. E. Foster, Using General-Purpose Planning for Action Selection in Human-Robot Interaction, in: AAAI 2016 Fall Symposium on Artificial Intelligence for Human-Robot Interaction, 2016.
[144] D. Poole, Normality and Faults in logic-based diagnosis, in: IJCAI, vol. 89, 1304–1310, 1989.
[145] H. E. Pople, On the mechanization of abductive logic, in: IJCAI, vol. 73, 147–152, 1973.
[146] K. Popper, The logic of scientific discovery, Routledge, 2005.
[147] H. Prakken, Formal systems for persuasion dialogue, The Knowledge Engineering Review 21 (02) (2006) 163–188.
[148] S. Prasada, The scope of formal explanation, Psychonomic Bulletin & Review (2017) 1–10.
[149] S. Prasada, E. M. Dillingham, Principled and statistical connections in common sense conception, Cognition 99 (1) (2006) 73–112.
[150] J. Preston, N. Epley, Explanations versus applications: The explanatory power of valuable beliefs, Psychological Science 16 (10) (2005) 826–832.
[151] M. Ranney, P. Thagard, Explanatory coherence and belief revision in naive physics, in: Proceedings of the Tenth Annual Conference of the Cognitive Science Society, 426–432, 1988.
[152] A. S. Rao, M. P. Georgeff, BDI agents: From theory to practice, in: ICMAS, vol. 95, 312–319, 1995.
[153] S. J. Read, A. Marcus-Newhall, Explanatory coherence in social explanations: A parallel distributed processing account, Journal of Personality and Social Psychology 65 (3) (1993) 429.
[154] B. Rehder, A causal-model theory of conceptual representation and categorization, Journal of Experimental Psychology: Learning, Memory, and Cognition 29 (6) (2003) 1141.
[155] B. Rehder, When similarity and causality compete in category-based property generalization, Memory & Cognition 34 (1) (2006) 3–16.
[156] R. Reiter, A theory of diagnosis from first principles, Artificial Intelligence 32 (1) (1987) 57–95.
[157] M. T. Ribeiro, S. Singh, C. Guestrin, Why Should I Trust You?: Explaining the Predictions of Any Classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 1135–1144, 2016.
[158] M. Robnik-Šikonja, I. Kononenko, Explaining classifications for individual instances, IEEE Transactions on Knowledge and Data Engineering 20 (5) (2008) 589–600.
[159] W. C. Salmon, Four decades of scientific explanation, University of Pittsburgh Press, 2006.
[160] J. Samland, M. Josephs, M. R. Waldmann, H. Rakoczy, The role of prescriptive norms and knowledge in children’s and adults’ causal selection, Journal of Experimental Psychology: General 145 (2) (2016) 125.
[161] …, in: P. Bello, M. Guarini, M. McShane, B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society, Cognitive Science Society, 1359–1364, 2014.
[162] M. Scriven, The concept of comprehension: From semantics to software, in: J. B. Carroll, R. O. Freedle (Eds.), Language comprehension and the acquisition of knowledge, Washington: W. H. Winston & Sons, 31–39, 1972.
[163] Z. Shams, M. de Vos, N. Oren, J. Padget, Normative Practical Reasoning via Argumentation and Dialogue, in: Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), AAAI Press, 2016.
[164] R. Singh, T. Miller, J. Newn, L. Sonenberg, E. Velloso, F. Vetere, Combining Planning with Gaze for Online Human Intention Recognition, in: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, 2018.
[165] B. R. Slugoski, M. Lalljee, R. Lamb, G. P. Ginsburg, Attribution in conversational context: Effect of mutual knowledge on explanation-giving, European Journal of Social Psychology 23 (3) (1993) 219–238.
[166] K. Stubbs, P. Hinds, D. Wettergreen, Autonomy and common ground in human-robot interaction: A field study, IEEE Intelligent Systems 22 (2) (2007) 42–50.