doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1710.11469 | 146 | Figure D.6: Data generating process for the stickmen example.
# D.4 MNIST: more sample efficient data augmentation
Here, we show further results for the experiment introduced in §5.5. We vary the number of augmented training examples c from 100 to 5000 for m = 10000 and c ∈ {100, 200, 500, 1000} for m = 1000. The degree of the rotations is sampled uniformly at random from [35, 70]. Figure D.5 shows the misclassification rates. Test set 1 contains rotated digits only, test set 2 is the usual MNIST test set. We see that the misclassification rates of CoRe are always lower on test set 1, showing that it makes data augmentation more efficient. For m = 1000, it even turns out to be beneficial for performance on test set 2.
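As a concrete illustration of the augmentation just described, the sketch below (a hypothetical helper, not the authors' code) draws c training digits, rotates each by an angle sampled uniformly at random from [35, 70] degrees, and returns the indices of the originals so that each original/rotated pair can be grouped for the conditional variance penalty.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(images, labels, c, seed=0):
    """Create c rotated copies of randomly chosen MNIST training images.

    images: (m, 28, 28) array, labels: (m,) array. Returns the rotated images,
    their labels, and the indices of the originals, which can serve as the ID
    variable linking each original/rotated pair.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=c, replace=False)
    angles = rng.uniform(35, 70, size=c)  # degrees, as in the setup above
    rotated = np.stack([
        rotate(images[i], angle, reshape=False, mode="constant", cval=0.0)
        for i, angle in zip(idx, angles)
    ])
    return rotated, labels[idx], idx
```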
(a) Examples from test sets 1–3. (b) Misclassification rates.
| 1710.11469#146 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
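The abstract above summarizes the CoRe idea: group observations that share the same class and identifier $(Y,\mathrm{ID})$ and penalize the conditional variance of the prediction (or the loss) within each group. The following is only a minimal NumPy sketch of such a penalty; the function name and the convention that an ID of -1 marks observations without identifier information are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def core_penalty(predictions, y, ids):
    """Average within-group variance of the predictions, grouping by (Y, ID).

    predictions: (n,) model outputs (e.g. logits or per-example losses)
    y:           (n,) class labels
    ids:         (n,) identifiers; -1 marks examples without ID information
    """
    penalty, n_groups = 0.0, 0
    for cls, idn in set(zip(y.tolist(), ids.tolist())):
        if idn == -1:              # ungrouped examples contribute nothing
            continue
        mask = (y == cls) & (ids == idn)
        if mask.sum() > 1:         # a variance needs at least two group members
            penalty += predictions[mask].var(ddof=1)
            n_groups += 1
    return penalty / max(n_groups, 1)

# Schematically, the training objective is then: loss + lambda_core * core_penalty(...)
```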
1710.11469 | 147 | (a) Examples from test sets 1–3. (b) Misclassification rates.
Figure D.7: a) Examples from the stickmen test set 1 (row 1), test set 2 (row 2) and test set 3 (row 3). In each row, the first three images from the left have y ≡ child; the remaining three images have y ≡ adult. Connected images are grouped examples. b) Misclassification rates for different numbers of grouped examples.
# D.5 Stickmen image-based age classification | 1710.11469#147 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 148 | # D.5 Stickmen image-based age classification
Here, we show further results for the experiment introduced in §5.4. Figure D.6 illustrates the data generating process. Recall that test set 1 follows the same distribution as the training set. In test sets 2 and 3 large movements are associated with both children and adults, while the movements are heavier in test set 3 than in test set 2. Figure D.7b shows results for different numbers of grouped examples. For c = 20 the misclassification rate of the CoRe estimator has a large variance. For c ∈ {50, 500, 2000}, the CoRe estimator shows similar results. Its performance is thus not sensitive to the number of grouped examples, once there are sufficiently many grouped observations in the training set. The pooled estimator fails to achieve good predictive performance on test sets 2 and 3 as it seems to use "movement" as a predictor for "age".
# D.6 Eyeglasses detection: image quality intervention | 1710.11469#148 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 149 | # D.6 Eyeglasses detection: image quality intervention
Here, we show further results for the experiments introduced in §5.3. Specifically, we consider interventions of different strengths by varying the mean of the quality intervention in µ ∈ {30, 40, 50}. Recall that we use ImageMagick to modify the image quality. In the training set and in test set 1, we sample the image quality value as q_{i,j} ∼ N(µ, σ = 10) and apply the command convert -quality q_{i,j} input.jpg output.jpg if y_i ≡ glasses. If y_i ≡ no glasses, the image is not modified. In test set 2, the above command is applied if y_i ≡ no glasses while images with y_i ≡ glasses are not changed. In test set 3 all images are left unchanged and in test set 4 the command is applied to all images, i.e. the quality of all images is reduced.
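For concreteness, the snippet below reproduces the intervention just described by calling the same ImageMagick convert command from Python; the helper name and the clipping of the sampled quality to [1, 100] are assumptions added for illustration.

```python
import subprocess
import numpy as np

def degrade_quality(in_path, out_path, mu, sigma=10.0, rng=None):
    """Sample q ~ N(mu, sigma) and run `convert -quality q in_path out_path`."""
    if rng is None:
        rng = np.random.default_rng()
    q = int(np.clip(rng.normal(mu, sigma), 1, 100))  # keep q in a valid JPEG range
    subprocess.run(["convert", "-quality", str(q), in_path, out_path], check=True)

# Training set / test set 1: apply only to images with y == "glasses";
# test set 2: apply only to images with y == "no glasses"; test set 4: apply to all.
```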
We run experiments for grouping settings 1–3 and for c = 5000, where the definition of the grouping settings 1–3 is identical to §D.2. Figure D.8 shows examples from the respective training and test sets and Figure D.9 shows the corresponding misclassification rates. Again, we observe that grouping setting 1 works best, followed by grouping setting 2. | 1710.11469#149 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 152 | Figure D.8: Examples from the CelebA image quality datasets, grouping settings 1–3 with µ ∈ {30, 40, 50}. In all rows, the first three images from the left have y ≡ no glasses; the remaining three images have y ≡ glasses. Connected images are grouped observations over which we calculate the conditional variance. In panels (a)–(c), row 1 shows examples from the training set, rows 2–4 contain examples from test sets 2–4, respectively. Panels (d)–(i) show examples from the respective training sets.
Interestingly, there is a large performance difference between µ = 40 and µ = 50 for the pooled estimator. Possibly, with µ = 50 the image quality is not sufficiently predictive for the target.
(a) Grouping setting 1. (b) Grouping setting 2. (c) Grouping setting 3. | 1710.11469#152 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 153 | (a) Grouping setting 1. (b) Grouping setting 2. (c) Grouping setting 3.
Figure D.9: Misclassification rates for the CelebA eyeglasses detection with image quality interventions, grouping settings 1–3 with c = 5000 and the mean of the Gaussian distribution µ ∈ {30, 40, 50}.
| 1710.11469#153 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 155 | # D.7 Elmer the Elephant
The color interventions for the experiment introduced in §5.6 were created as follows. In the training set, if y_i ≡ elephant we apply the following ImageMagick command for the grouped examples: convert -modulate 100,0,100 input.jpg output.jpg. Test sets 1 and 2 were already discussed in §5.6: in test set 1, all images are left unchanged. In test set 2, the above command is applied if y_i ≡ horse. If y_i ≡ elephant, we sample c_{i,j} ∼ N(µ = 20, σ = 1) and apply convert -modulate 100,100,100-c_{i,j} input.jpg output.jpg to the image. Here, we again consider more test sets than in §5.6. In test set 4, the latter command is applied to all images. It rotates the colors of the image in a cyclic manner.14 In test set 3, all images are changed to grayscale. The causal graph for the data generating process is shown in Figure D.12. Examples from all four test sets are shown in Figure D.10 and classification results are shown in Figure D.11.
14. For more details, see http://www.imagemagick.org/Usage/color_mods/#color_mods.
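A short sketch of the two modulate calls described above (the function names are illustrative): the first removes saturation for the grouped elephant images, the second applies the cyclic color rotation with c drawn from N(µ = 20, σ = 1).

```python
import subprocess
import numpy as np

def to_grayscale(in_path, out_path):
    """`convert -modulate 100,0,100`: drop saturation (grouped elephant images)."""
    subprocess.run(["convert", "-modulate", "100,0,100", in_path, out_path], check=True)

def rotate_colors(in_path, out_path, rng=None):
    """Cyclic color rotation with c ~ N(20, 1), as used for test sets 2 and 4."""
    if rng is None:
        rng = np.random.default_rng()
    c = rng.normal(20.0, 1.0)
    modulate = f"100,100,{100.0 - c:.1f}"  # corresponds to `-modulate 100,100,100-c`
    subprocess.run(["convert", "-modulate", modulate, in_path, out_path], check=True)
```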
| 1710.11469#155 | Conditional Variance Penalties and Domain Shift Robustness |
1710.10903 | 0 | arXiv:1710.10903v3 [stat.ML] 4 Feb 2018
Published as a conference paper at ICLR 2018
# GRAPH ATTENTION NETWORKS
Petar Veličković* Department of Computer Science and Technology University of Cambridge [email protected]
Guillem Cucurull* Centre de Visió per Computador, UAB [email protected]
Arantxa Casanova* Centre de Visió per Computador, UAB [email protected]
Adriana Romero Montréal Institute for Learning Algorithms [email protected]
Pietro Liò Department of Computer Science and Technology University of Cambridge [email protected]
Yoshua Bengio Montréal Institute for Learning Algorithms [email protected]
# ABSTRACT | 1710.10903#0 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 1 | Yoshua Bengio Montréal Institute for Learning Algorithms [email protected]
# ABSTRACT
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
# INTRODUCTION | 1710.10903#1 | Graph Attention Networks |
1710.10903 | 2 | # INTRODUCTION
Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has a grid-like structure. These architectures efficiently reuse their local filters, with learnable parameters, by applying them to all the input positions.
However, many interesting tasks involve data that can not be represented in a grid-like structure and that instead lies in an irregular domain. This is the case of 3D meshes, social networks, telecommunication networks, biological networks or brain connectomes. Such data can usually be represented in the form of graphs. | 1710.10903#2 | Graph Attention Networks |
1710.10903 | 3 | There have been several attempts in the literature to extend neural networks to deal with arbitrarily structured graphs. Early work used recursive neural networks to process data represented in graph domains as directed acyclic graphs (Frasconi et al., 1998; Sperduti & Starita, 1997). Graph Neural Networks (GNNs) were introduced in Gori et al. (2005) and Scarselli et al. (2009) as a generalization of recursive neural networks that can directly deal with a more general class of graphs, e.g. cyclic, directed and undirected graphs. GNNs consist of an iterative process, which propagates the node states until equilibrium; followed by a neural network, which produces an output for each node
*Work performed while the author was at the Montréal Institute of Learning Algorithms.
based on its state. This idea was adopted and improved by Li et al. (2016), which propose to use gated recurrent units (Cho et al., 2014) in the propagation step.
Nevertheless, there is an increasing interest in generalizing convolutions to the graph domain. Advances in this direction are often categorized as spectral approaches and non-spectral approaches. | 1710.10903#3 | Graph Attention Networks |
1710.10903 | 4 | On one hand, spectral approaches work with a spectral representation of the graphs and have been successfully applied in the context of node classification. In Bruna et al. (2014), the convolution operation is defined in the Fourier domain by computing the eigendecomposition of the graph Laplacian, resulting in potentially intense computations and non-spatially localized filters. These issues were addressed by subsequent works. Henaff et al. (2015) introduced a parameterization of the spectral filters with smooth coefficients in order to make them spatially localized. Later, Defferrard et al. (2016) proposed to approximate the filters by means of a Chebyshev expansion of the graph Laplacian, removing the need to compute the eigenvectors of the Laplacian and yielding spatially localized filters. Finally, Kipf & Welling (2017) simplified the previous method by restricting the filters to operate in a 1-step neighborhood around each node. However, in all of the aforementioned spectral approaches, the learned | 1710.10903#4 | Graph Attention Networks |
1710.10903 | 6 | On the other hand, we have non-spectral approaches (Duvenaud et al., 2015; Atwood & Towsley, 2016; Hamilton et al., 2017), which define convolutions directly on the graph, operating on groups of spatially close neighbors. One of the challenges of these approaches is to define an operator which works with different sized neighborhoods and maintains the weight sharing property of CNNs. In some cases, this requires learning a specific weight matrix for each node degree (Duvenaud et al., 2015), using the powers of a transition matrix to define the neighborhood while learning weights for each input channel and neighborhood degree (Atwood & Towsley, 2016), or extracting and normalizing neighborhoods containing a fixed number of nodes (Niepert et al., 2016). Monti et al. (2016) presented mixture model CNNs (MoNet), a spatial approach which provides a unified generalization of CNN architectures to graphs. More recently, Hamilton et al. (2017) introduced GraphSAGE, a method for computing node representations in an inductive manner. This technique operates by sampling a | 1710.10903#6 | Graph Attention Networks |
1710.10903 | 7 | recently, Hamilton et al. (2017) introduced GraphSAGE, a method for computing node representations in an inductive manner. This technique operates by sampling a fixed-size neighborhood of each node, and then performing a specific aggregator over it (such as the mean over all the sampled neighbors' feature vectors, or the result of feeding them through a recurrent neural network). This approach has yielded impressive performance across several large-scale inductive benchmarks. | 1710.10903#7 | Graph Attention Networks |
1710.10903 | 8 | Attention mechanisms have become almost a de facto standard in many sequence-based tasks (Bahdanau et al., 2015; Gehring et al., 2016). One of the benefits of attention mechanisms is that they allow for dealing with variable sized inputs, focusing on the most relevant parts of the input to make decisions. When an attention mechanism is used to compute a representation of a single sequence, it is commonly referred to as self-attention or intra-attention. Together with Recurrent Neural Networks (RNNs) or convolutions, self-attention has proven to be useful for tasks such as machine reading (Cheng et al., 2016) and learning sentence representations (Lin et al., 2017). However, Vaswani et al. (2017) showed that not only can self-attention improve a method based on RNNs or convolutions, but also that it is sufficient for constructing a powerful model obtaining state-of-the-art performance on the machine translation task. | 1710.10903#8 | Graph Attention Networks |
1710.10903 | 9 | Inspired by this recent work, we introduce an attention-based architecture to perform node classification of graph-structured data. The idea is to compute the hidden representations of each node in the graph, by attending over its neighbors, following a self-attention strategy. The attention architecture has several interesting properties: (1) the operation is efficient, since it is parallelizable across node-neighbor pairs; (2) it can be applied to graph nodes having different degrees by specifying arbitrary weights to the neighbors; and (3) the model is directly applicable to inductive learning problems, including tasks where the model has to generalize to completely unseen graphs. We validate the proposed approach on four challenging benchmarks: Cora, Citeseer and Pubmed citation networks as well as an inductive protein-protein interaction dataset, achieving or matching state-of-the-art results that highlight the potential of attention-based models when dealing with arbitrarily structured graphs.
It is worth noting that, as Kipf & Welling (2017) and Atwood & Towsley (2016), our work can also be reformulated as a particular instance of MoNet (Monti et al., 2016). Moreover, our approach of
| 1710.10903#9 | Graph Attention Networks |
1710.10903 | 10 | sharing a neural network computation across edges is reminiscent of the formulation of relational networks (Santoro et al., 2017) and VAIN (Hoshen, 2017), wherein relations between objects or agents are aggregated pair-wise, by employing a shared mechanism. Similarly, our proposed attention model can be connected to the works by Duan et al. (2017) and Denil et al. (2017), which use a neighborhood attention operation to compute attention coefficients between different objects in an environment. Other related approaches include locally linear embedding (LLE) (Roweis & Saul, 2000) and memory networks (Weston et al., 2014). LLE selects a fixed number of neighbors around each data point, and learns a weight coefficient for each neighbor to reconstruct each point as a weighted sum of its neighbors. A second optimization step extracts the point's feature embedding. Memory networks also share some connections with our work, in particular, if we interpret the neighborhood of a node as the memory, which is used to compute the node features by attending over its values, and then is updated by storing the new features in the same position.
# 2 GAT ARCHITECTURE | 1710.10903#10 | Graph Attention Networks |
1710.10903 | 11 | # 2 GAT ARCHITECTURE
In this section, we will present the building block layer used to construct arbitrary graph attention networks (through stacking this layer), and directly outline its theoretical and practical benefits and limitations compared to prior work in the domain of neural graph processing.
2.1 GRAPH ATTENTIONAL LAYER
We will start by describing a single graph attentional layer, as the sole layer utilized throughout all of the GAT architectures used in our experiments. The particular attentional setup utilized by us closely follows the work of Bahdanau et al. (2015), but the framework is agnostic to the particular choice of attention mechanism. The input to our layer is a set of node features, $\mathbf{h} = \{\vec{h}_1, \vec{h}_2, \dots, \vec{h}_N\}$, $\vec{h}_i \in \mathbb{R}^F$, where $N$ is the number of nodes and $F$ is the number of features in each node. The layer produces a new set of node features (of potentially different cardinality $F'$), $\mathbf{h}' = \{\vec{h}'_1, \vec{h}'_2, \dots, \vec{h}'_N\}$, $\vec{h}'_i \in \mathbb{R}^{F'}$, as its output. | 1710.10903#11 | Graph Attention Networks |
1710.10903 | 12 | In order to obtain sufficient expressive power to transform the input features into higher-level features, at least one learnable linear transformation is required. To that end, as an initial step, a shared linear transformation, parametrized by a weight matrix $\mathbf{W} \in \mathbb{R}^{F' \times F}$, is applied to every node. We then perform self-attention on the nodes: a shared attentional mechanism $a : \mathbb{R}^{F'} \times \mathbb{R}^{F'} \rightarrow \mathbb{R}$ computes attention coefficients
$$e_{ij} = a\left(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j\right) \qquad (1)$$
that indicate the importance of node $j$'s features to node $i$. In its most general formulation, the model allows every node to attend on every other node, dropping all structural information. We inject the graph structure into the mechanism by performing masked attention: we only compute $e_{ij}$ for nodes $j \in \mathcal{N}_i$, where $\mathcal{N}_i$ is some neighborhood of node $i$ in the graph. In all our experiments, these will be exactly the first-order neighbors of $i$ (including $i$). To make coefficients easily comparable across different nodes, we normalize them across all choices of $j$ using the softmax function:
$$\alpha_{ij} = \mathrm{softmax}_j(e_{ij}) = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})} \qquad (2)$$ | 1710.10903#12 | Graph Attention Networks |
1710.10903 | 13 | $$\alpha_{ij} = \mathrm{softmax}_j(e_{ij}) = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})} \qquad (2)$$
In our experiments, the attention mechanism $a$ is a single-layer feedforward neural network, parametrized by a weight vector $\vec{a} \in \mathbb{R}^{2F'}$, and applying the LeakyReLU nonlinearity (with negative input slope $\alpha = 0.2$). Fully expanded out, the coefficients computed by the attention mechanism (illustrated by Figure 1 (left)) may then be expressed as:
$$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\vec{a}^{T}[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_j]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(\vec{a}^{T}[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_k]\right)\right)} \qquad (3)$$
where $\cdot^{T}$ represents transposition and $\Vert$ is the concatenation operation.
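As a minimal NumPy sketch of Equations 1–3 (not the authors' reference implementation), the dense helper below computes the masked, normalized attention coefficients for a single head; `adj` is assumed to be a binary adjacency matrix that already contains self-loops.

```python
import numpy as np

def attention_coefficients(H, W, a, adj, negative_slope=0.2):
    """Single-head GAT attention coefficients (Equations 1-3), dense version.

    H: (N, F) input node features          W: (F', F) shared weight matrix
    a: (2F',) attention weight vector      adj: (N, N) adjacency with self-loops
    Returns the transformed features Wh (N, F') and the coefficients alpha (N, N).
    """
    Wh = H @ W.T                                  # (N, F') transformed features
    Fp = Wh.shape[1]
    f_i = Wh @ a[:Fp]                             # source part of a^T [Wh_i || Wh_j]
    f_j = Wh @ a[Fp:]                             # neighbor part
    e = f_i[:, None] + f_j[None, :]               # raw scores e_ij for every pair
    e = np.where(e > 0, e, negative_slope * e)    # LeakyReLU with slope 0.2
    e = np.where(adj > 0, e, -1e9)                # masked attention: keep j in N_i
    e = e - e.max(axis=1, keepdims=True)          # numerical stability
    exp_e = np.exp(e)
    alpha = exp_e / exp_e.sum(axis=1, keepdims=True)  # softmax over neighbors j
    return Wh, alpha
```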
Once obtained, the normalized attention coefficients are used to compute a linear combination of the features corresponding to them, to serve as the final output features for every node (after potentially
| 1710.10903#13 | Graph Attention Networks |
1710.10903 | 14 | Figure 1: Left: The attention mechanism $a(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$ employed by our model, parametrized by a weight vector $\vec{a} \in \mathbb{R}^{2F'}$, applying a LeakyReLU activation. Right: An illustration of multi-head attention (with $K = 3$ heads) by node 1 on its neighborhood. Different arrow styles and colors denote independent attention computations. The aggregated features from each head are concatenated or averaged to obtain $\vec{h}'_1$.
applying a nonlinearity, $\sigma$):
$$\vec{h}'_i = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij} \mathbf{W} \vec{h}_j\right) \qquad (4)$$
To stabilize the learning process of self-attention, we have found extending our mechanism to employ multi-head attention to be beneficial, similarly to Vaswani et al. (2017). Specifically, $K$ independent attention mechanisms execute the transformation of Equation 4, and then their features are concatenated, resulting in the following output feature representation:
$$\vec{h}'_i = \bigg\Vert_{k=1}^{K} \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha^k_{ij}\,\mathbf{W}^k\vec{h}_j\right) \qquad (5)$$
where ∥ represents concatenation, $\alpha^k_{ij}$ are normalized attention coefficients computed by the k-th attention mechanism ($a^k$), and $\mathbf{W}^k$ is the corresponding input linear transformation's weight matrix. Note that, in this setting, the final returned output, $\vec{h}'$, will consist of KF′ features (rather than F′) for each node.
Specifically, if we perform multi-head attention on the final (prediction) layer of the network, concatenation is no longer sensible; instead, we employ averaging, and delay applying the final nonlinearity (usually a softmax or logistic sigmoid for classification problems) until then:
$$\vec{h}'_i = \sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j \in \mathcal{N}_i} \alpha^k_{ij}\,\mathbf{W}^k\vec{h}_j\right) \qquad (6)$$
The aggregation process of a multi-head graph attentional layer is illustrated by Figure 1 (right).
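The difference between Equations (5) and (6) — per-head nonlinearity followed by concatenation for hidden layers, versus averaging before the final nonlinearity for the prediction layer — can be summarized by the following minimal NumPy sketch. It assumes the per-head weighted sums of Eq. (4) (before σ) have already been computed; the helper names are illustrative, not the authors' code.

```python
import numpy as np

def elu(x):
    # exponential linear unit, the hidden-layer nonlinearity used in our experiments
    return np.where(x > 0, x, np.exp(x) - 1.0)

def aggregate_heads(head_preactivations, final_layer=False, nonlinearity=elu):
    """Combine the K heads of one GAT layer.

    head_preactivations: list of K arrays, each (N, F'), holding
        sum_j alpha^k_ij W^k h_j for every node i (Eq. (4) before sigma).
    """
    if final_layer:
        # Eq. (6): average the heads first, apply the nonlinearity afterwards
        return nonlinearity(np.mean(head_preactivations, axis=0))
    # Eq. (5): apply sigma per head, then concatenate the K results
    return np.concatenate([nonlinearity(p) for p in head_preactivations], axis=-1)
```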
# 2.2 COMPARISONS TO RELATED WORK
The graph attentional layer described in subsection 2.1 directly addresses several issues that were present in prior approaches to modelling graph-structured data with neural networks:
⢠Computationally, it is highly efï¬cient: the operation of the self-attentional layer can be par- allelized across all edges, and the computation of output features can be parallelized across
all nodes. No eigendecompositions or similar costly matrix operations are required. The time complexity of a single GAT attention head computing F′ features may be expressed as O(|V|FF′ + |E|F′), where F is the number of input features, and |V| and |E| are the numbers of nodes and edges in the graph, respectively. This complexity is on par with baseline methods such as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Applying multi-head attention multiplies the storage and parameter requirements by a factor of K, while the individual heads' computations are fully independent and can be parallelized.
• As opposed to GCNs, our model allows for (implicitly) assigning different importances to nodes of the same neighborhood, enabling a leap in model capacity. Furthermore, analyzing the learned attentional weights may lead to benefits in interpretability, as was the case in the machine translation domain (e.g. the qualitative analysis of Bahdanau et al. (2015)).

• The attention mechanism is applied in a shared manner to all edges in the graph, and therefore it does not depend on upfront access to the global graph structure or (features of) all of its nodes (a limitation of many prior techniques). This has several desirable implications:
– The graph is not required to be undirected (we may simply leave out computing $\alpha_{ij}$ if edge j → i is not present).
– It makes our technique directly applicable to inductive learning, including tasks where the model is evaluated on graphs that are completely unseen during training.
⢠The recently published inductive method of Hamilton et al. (2017) samples a ï¬xed-size neighborhood of each node, in order to keep its computational footprint consistent; this does not allow it access to the entirety of the neighborhood while performing inference. Moreover, this technique achieved some of its strongest results when an LSTM (Hochreiter & Schmidhuber, 1997)-based neighborhood aggregator is used. This assumes the existence of a consistent sequential node ordering across neighborhoods, and the authors have rec- tiï¬ed it by consistently feeding randomly-ordered sequences to the LSTM. Our technique does not suffer from either of these issuesâit works with the entirety of the neighborhood (at the expense of a variable computational footprint, which is still on-par with methods like the GCN), and does not assume any ordering within it. | 1710.10903#18 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
• As mentioned in Section 1, GAT can be reformulated as a particular instance of MoNet (Monti et al., 2016). More specifically, setting the pseudo-coordinate function to be u(x, y) = f(x)∥f(y), where f(x) represents (potentially MLP-transformed) features of node x and ∥ is concatenation, and the weight function to be $w_j(u) = \mathrm{softmax}(\mathrm{MLP}(u))$ (with the softmax performed over the entire neighborhood of a node) would make MoNet's patch operator similar to ours. Nevertheless, one should note that, in comparison to previously considered MoNet instances, our model uses node features for similarity computations, rather than the node's structural properties (which would assume knowing the graph structure upfront).
We were able to produce a version of the GAT layer that leverages sparse matrix operations, reducing the storage complexity to linear in the number of nodes and edges and enabling the execution of GAT models on larger graph datasets. However, the tensor manipulation framework we used only supports sparse matrix multiplication for rank-2 tensors, which limits the batching capabilities of the layer as it is currently implemented (especially for datasets with multiple graphs). Appropriately addressing this constraint is an important direction for future work. Depending on the regularity of the graph structure in place, GPUs may not be able to offer major performance benefits compared to CPUs in these sparse scenarios. It should also be noted that the size of the "receptive field" of our model is upper-bounded by the depth of the network (similarly as for GCN and similar models). Techniques such as skip connections (He et al., 2016) could be readily applied for appropriately extending the depth, however. Lastly, parallelization across all the graph edges, especially in a distributed manner, may involve a lot of redundant computation, as the neighborhoods will often highly overlap in graphs of interest.
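The following is a minimal NumPy sketch of what such an edge-wise (sparse) evaluation of one attention head can look like: attention logits are computed only for existing edges and normalized per destination node, so storage is linear in the number of nodes and edges. It is an illustration under the assumption that self-loops are included in the edge list, and the function and argument names are hypothetical rather than the released implementation.

```python
import numpy as np

def edgewise_gat_head(h, W, a, edges, nonlinearity=lambda x: x):
    """Sparse (edge-list) evaluation of one GAT attention head.

    h:     (N, F) node features
    W:     (F, F') shared weight matrix
    a:     (2F',) attention vector
    edges: (E, 2) directed edges as (source j, destination i); self-loops assumed present
    """
    N = h.shape[0]
    Wh = h @ W                                             # (N, F')
    src, dst = edges[:, 0], edges[:, 1]
    # unnormalized attention logits, one per edge: a^T [W h_i || W h_j]
    e = np.concatenate([Wh[dst], Wh[src]], axis=1) @ a     # (E,)
    e = np.where(e > 0, e, 0.2 * e)                        # LeakyReLU
    # softmax over every destination node's incoming edges ("segment" softmax)
    e = np.exp(e - e.max())
    denom = np.zeros(N)
    np.add.at(denom, dst, e)
    alpha = e / denom[dst]
    # weighted aggregation of the transformed neighbor features
    out = np.zeros_like(Wh)
    np.add.at(out, dst, alpha[:, None] * Wh[src])
    return nonlinearity(out)
```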
# 3 EVALUATION
We have performed comparative evaluation of GAT models against a wide variety of strong baselines and previous approaches, on four established graph-based benchmark tasks (transductive as
well as inductive), achieving or matching state-of-the-art performance across all of them. This section summarizes our experimental setup, results, and a brief qualitative analysis of a GAT model's extracted feature representations.

Table 1: Summary of the datasets used in our experiments.

|                    | Cora            | Citeseer        | Pubmed          | PPI               |
|--------------------|-----------------|-----------------|-----------------|-------------------|
| Task               | Transductive    | Transductive    | Transductive    | Inductive         |
| # Nodes            | 2708 (1 graph)  | 3327 (1 graph)  | 19717 (1 graph) | 56944 (24 graphs) |
| # Edges            | 5429            | 4732            | 44338           | 818716            |
| # Features/Node    | 1433            | 3703            | 500             | 50                |
| # Classes          | 7               | 6               | 3               | 121 (multilabel)  |
| # Training Nodes   | 140             | 120             | 60              | 44906 (20 graphs) |
| # Validation Nodes | 500             | 500             | 500             | 6514 (2 graphs)   |
| # Test Nodes       | 1000            | 1000            | 1000            | 5524 (2 graphs)   |
# 3.1 DATASETS
Transductive learning We utilize three standard citation network benchmark datasets – Cora, Citeseer and Pubmed (Sen et al., 2008) – and closely follow the transductive experimental setup of Yang et al. (2016). In all of these datasets, nodes correspond to documents and edges to (undirected) citations. Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for training; however, honoring the transductive setup, the training algorithm has access to all of the nodes' feature vectors. The predictive power of the trained models is evaluated on 1000 test nodes, and we use 500 additional nodes for validation purposes (the same ones as used by Kipf & Welling (2017)). The Cora dataset contains 2708 nodes, 5429 edges, 7 classes and 1433 features per node. The Citeseer dataset contains 3327 nodes, 4732 edges, 6 classes and 3703 features per node. The Pubmed dataset contains 19717 nodes, 44338 edges, 3 classes and 500 features per node.
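A split of this shape (20 labeled nodes per class for training, plus validation and test pools) can be sketched as below. This is purely illustrative: the experiments reuse the fixed public splits of Yang et al. (2016), whereas this hypothetical helper only mimics their sizes.

```python
import numpy as np

def transductive_split(labels, per_class=20, num_val=500, num_test=1000, seed=0):
    """Illustrative transductive split: 20 training nodes per class, then val/test pools."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    train, rest, counts = [], [], {}
    for idx in order:
        c = labels[idx]
        if counts.get(c, 0) < per_class:
            train.append(idx)                 # keep until each class has 20 labeled nodes
            counts[c] = counts.get(c, 0) + 1
        else:
            rest.append(idx)
    val, test = rest[:num_val], rest[num_val:num_val + num_test]
    return np.array(train), np.array(val), np.array(test)
```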
Inductive learning We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues (Zitnik & Leskovec, 2017). The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain completely unobserved during training. To construct the graphs, we used the preprocessed data provided by Hamilton et al. (2017). The average number of nodes per graph is 2372. Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node set from gene ontology, collected from the Molecular Signatures Database (Subramanian et al., 2005), and a node can possess several labels simultaneously.
An overview of the interesting characteristics of the datasets is given in Table 1.
# 3.2 STATE-OF-THE-ART METHODS
Transductive learning For transductive learning tasks, we compare against the same strong baselines and state-of-the-art approaches as specified in Kipf & Welling (2017). This includes label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006), skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014), the iterative classification algorithm (ICA) (Lu & Getoor, 2003) and Planetoid (Yang et al., 2016). We also directly compare our model against GCNs (Kipf & Welling, 2017), as well as graph convolutional models utilising higher-order Chebyshev filters (Defferrard et al., 2016), and the MoNet model presented in Monti et al. (2016).
Inductive learning For the inductive learning task, we compare against the four different supervised GraphSAGE inductive methods presented in Hamilton et al. (2017). These provide a variety of approaches to aggregating features within a sampled neighborhood: GraphSAGE-GCN (which extends a graph convolution-style operation to the inductive setting), GraphSAGE-mean (taking
the elementwise mean value of feature vectors), GraphSAGE-LSTM (aggregating by feeding the neighborhood features into an LSTM) and GraphSAGE-pool (taking the elementwise maximization operation of feature vectors transformed by a shared nonlinear multilayer perceptron). The other transductive approaches are either completely inappropriate in an inductive setting or assume that nodes are incrementally added to a single graph, making them unusable for the setup where test graphs are completely unseen during training (such as the PPI dataset).
Additionally, for both tasks we provide the performance of a per-node shared multilayer perceptron (MLP) classifier (that does not incorporate graph structure at all).
# 3.3 EXPERIMENTAL SETUP
Transductive learning For the transductive learning tasks, we apply a two-layer GAT model. Its architectural hyperparameters have been optimized on the Cora dataset and are then reused for Citeseer. The first layer consists of K = 8 attention heads computing F′ = 8 features each (for a total of 64 features), followed by an exponential linear unit (ELU) nonlinearity (Clevert et al., 2016). The second layer is used for classification: a single attention head that computes C features (where C is the number of classes), followed by a softmax activation. For coping with the small training set sizes, regularization is liberally applied within the model. During training, we apply L2 regularization with λ = 0.0005. Furthermore, dropout with p = 0.6 is applied to both layers' inputs, as well as to the normalized attention coefficients (critically, this means that at each training iteration, each node is exposed to a stochastically sampled neighborhood). Similarly as observed by Monti et al. (2016), we found that Pubmed's training set size (60 examples) required slight changes to the GAT architecture: we have applied K = 8 output attention heads (instead of one), and strengthened the L2 regularization to λ = 0.001. Otherwise, the architecture matches the one used for Cora and Citeseer.
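For readability, the transductive architectures described above can be summarized as plain configuration dictionaries; the field names below are illustrative assumptions, not the released code's API.

```python
# Hypothetical configuration mirroring the transductive setup described above.
cora_citeseer_gat = {
    "layers": [
        {"heads": 8, "features_per_head": 8, "activation": "elu",
         "aggregation": "concat"},                       # 8 * 8 = 64 hidden features
        {"heads": 1, "features_per_head": "num_classes", "activation": "softmax",
         "aggregation": "average"},                      # prediction layer
    ],
    "dropout": 0.6,            # applied to layer inputs and to attention coefficients
    "l2_coefficient": 0.0005,
}

# Pubmed: 8 output attention heads and stronger L2 regularization, otherwise identical.
pubmed_gat = dict(cora_citeseer_gat, l2_coefficient=0.001)
pubmed_gat["layers"] = [cora_citeseer_gat["layers"][0],
                        dict(cora_citeseer_gat["layers"][1], heads=8)]
```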
Inductive learning For the inductive learning task, we apply a three-layer GAT model. Both of the first two layers consist of K = 4 attention heads computing F′ = 256 features (for a total of 1024 features), followed by an ELU nonlinearity. The final layer is used for (multi-label) classification: K = 6 attention heads computing 121 features each, that are averaged and followed by a logistic sigmoid activation. The training sets for this task are sufficiently large and we found no need to apply L2 regularization or dropout; we have, however, successfully employed skip connections (He et al., 2016) across the intermediate attentional layer. We utilize a batch size of 2 graphs during training. To strictly evaluate the benefits of applying an attention mechanism in this setting (i.e. comparing with a near GCN-equivalent model), we also provide the results when a constant attention mechanism, a(x, y) = 1, is used, with the same architecture; this will assign the same weight to every neighbor.
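As a brief illustration of the constant attention baseline: with a(x, y) = 1, the softmax gives every neighbor of node i the same weight 1/|N_i|, so each head reduces to uniform neighborhood averaging. The sketch below is an assumption-laden toy, not the Const-GAT implementation itself.

```python
import numpy as np

def constant_attention(Wh_i, Wh_j):
    # Const-GAT: a(x, y) = 1 for every edge, regardless of the node features
    return 1.0

def const_gat_aggregate(Wh, neighbors_of_i):
    # after the softmax, every neighbor receives weight 1 / |N_i| (GCN-like aggregation)
    alpha = np.full(len(neighbors_of_i), 1.0 / len(neighbors_of_i))
    return alpha @ Wh[neighbors_of_i]
```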
Both models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to minimize cross-entropy on the training nodes using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.01 for Pubmed, and 0.005 for all other datasets. In both cases we use an early stopping strategy on both the cross-entropy loss and accuracy (transductive) or micro-F1 (inductive) score on the validation nodes, with a patience of 100 epochs¹.
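The early stopping rule monitoring two quantities at once can be sketched as follows; `run_epoch` is a hypothetical callback standing in for one optimization epoch, so this is only an outline of the logic, not the training script.

```python
def train_with_early_stopping(run_epoch, patience=100, max_epochs=100000):
    """Illustrative driver: stop when neither the validation loss nor the
    validation metric (accuracy or micro-F1) has improved for `patience` epochs."""
    best_loss, best_metric, wait = float("inf"), 0.0, 0
    for epoch in range(max_epochs):
        val_loss, val_metric = run_epoch(epoch)
        if val_loss < best_loss or val_metric > best_metric:
            best_loss = min(best_loss, val_loss)
            best_metric = max(best_metric, val_metric)
            wait = 0                      # at least one monitored quantity improved
        else:
            wait += 1
            if wait >= patience:
                break
    return best_loss, best_metric
```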
# 3.4 RESULTS
The results of our comparative evaluation experiments are summarized in Tables 2 and 3.
For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 100 runs, and reuse the metrics already reported in Kipf & Welling (2017) and Monti et al. (2016) for state-of-the-art techniques. Specifically, for the Chebyshev filter-based approach (Defferrard et al., 2016), we provide the maximum reported performance for filters of orders K = 2 and K = 3. In order to fairly assess the benefits of the attention mechanism, we further evaluate a GCN model that computes 64 hidden features, attempting both the ReLU and ELU activation, and reporting (as GCN-64*) the better result after 100 runs (which was the ReLU in all three cases).
For the inductive task, we report the micro-averaged F1 score on the nodes of the two unseen test graphs, averaged after 10 runs, and reuse the metrics already reported in Hamilton et al. (2017) for the other techniques.
¹ Our implementation of the GAT layer may be found at: https://github.com/PetarV-/GAT.
Table 2: Summary of results in terms of classification accuracies, for the transductive Cora, Citeseer and Pubmed datasets.

| Method | Cora | Citeseer | Pubmed |
|---|---|---|---|
| MLP | 55.1% | 46.5% | 71.4% |
| ManiReg (Belkin et al., 2006) | 59.5% | 60.1% | 70.7% |
| SemiEmb (Weston et al., 2012) | 59.0% | 59.6% | 71.7% |
| LP (Zhu et al., 2003) | 68.0% | 45.3% | 63.0% |
| DeepWalk (Perozzi et al., 2014) | 67.2% | 43.2% | 65.3% |
| ICA (Lu & Getoor, 2003) | 75.1% | 69.1% | 73.9% |
| Planetoid (Yang et al., 2016) | 75.7% | 64.7% | 77.2% |
| Chebyshev (Defferrard et al., 2016) | 81.2% | 69.8% | 74.4% |
| GCN (Kipf & Welling, 2017) | 81.5% | 70.3% | 79.0% |
| MoNet (Monti et al., 2016) | 81.7 ± 0.5% | – | 78.8 ± 0.3% |
| GCN-64* | 81.4 ± 0.5% | 70.9 ± 0.5% | 79.0 ± 0.3% |
| GAT (ours) | 83.0 ± 0.7% | 72.5 ± 0.7% | 79.0 ± 0.3% |
Table 3: Summary of results in terms of micro-averaged F1 scores, for the PPI dataset. GraphSAGE* corresponds to the best GraphSAGE result we were able to obtain by just modifying its architecture. Const-GAT corresponds to a model with the same architecture as GAT, but with a constant attention mechanism (assigning same importance to each neighbor; GCN-like inductive operator).

| Method | PPI (micro-F1) |
|---|---|
| Random | 0.396 |
| MLP | 0.422 |
| GraphSAGE-GCN (Hamilton et al., 2017) | 0.500 |
| GraphSAGE-mean (Hamilton et al., 2017) | 0.598 |
| GraphSAGE-LSTM (Hamilton et al., 2017) | 0.612 |
| GraphSAGE-pool (Hamilton et al., 2017) | 0.600 |
| GraphSAGE* | 0.768 |
| Const-GAT (ours) | 0.934 ± 0.006 |
| GAT (ours) | 0.973 ± 0.002 |
Specifically, as our setup is supervised, we compare against the supervised GraphSAGE approaches. To evaluate the benefits of aggregating across the entire neighborhood, we further provide (as GraphSAGE*) the best result we were able to achieve with GraphSAGE by just modifying its architecture (this was with a three-layer GraphSAGE-LSTM with [512, 512, 726] features computed in each layer and 128 features used for aggregating neighborhoods). Finally, we report the 10-run result of our constant attention GAT model (as Const-GAT), to fairly evaluate the benefits of the attention mechanism against a GCN-like aggregation scheme (with the same architecture).
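For reference, the micro-averaged F1 score used for the inductive PPI evaluation pools true/false positives and negatives over all nodes and all 121 labels before computing precision and recall. A minimal sketch (assumed helper, not the evaluation script):

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for multi-label predictions.

    y_true, y_pred: (num_nodes, num_labels) binary arrays (121 labels for PPI).
    """
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom > 0 else 0.0
```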
Our results successfully demonstrate state-of-the-art performance being achieved or matched across all four datasets, in concordance with our expectations, as per the discussion in Section 2.2. More specifically, we are able to improve upon GCNs by a margin of 1.5% and 1.6% on Cora and Citeseer, respectively, suggesting that assigning different weights to nodes of the same neighborhood may be beneficial. It is worth noting the improvements achieved on the PPI dataset: our GAT model improves by 20.5% w.r.t. the best GraphSAGE result we were able to obtain, demonstrating that our model has the potential to be applied in inductive settings, and that larger predictive power can be leveraged by observing the entire neighborhood. Furthermore, it improves by 3.9% w.r.t. Const-GAT (the identical architecture with constant attention mechanism), once again directly demonstrating the significance of being able to assign different weights to different neighbors.
The effectiveness of the learned feature representations may also be investigated qualitatively; for this purpose we provide a visualization of the t-SNE (Maaten & Hinton, 2008)-transformed feature representations extracted by the first layer of a GAT model pre-trained on the Cora dataset (Figure 2). The representation exhibits discernible clustering in the projected 2D space. Note that these clusters correspond to the seven labels of the dataset, verifying the model's discriminative power across the seven topic classes of Cora. Additionally, we visualize the relative strengths of the normalized attention coefficients (averaged across all eight attention heads). Properly interpreting these coefficients (as performed by e.g. Bahdanau et al. (2015)) will require further domain knowledge about the dataset under study, and is left for future work.
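A Figure-2-style visualization can be sketched as below: project the first-layer activations with t-SNE and derive edge thickness from attention coefficients aggregated over heads and both directions. The `attention` array and function signature are assumptions for illustration, not outputs of the released code.

```python
import numpy as np
from sklearn.manifold import TSNE

def visualize_first_layer(hidden_features, attention, edges):
    """Sketch of the qualitative analysis behind Figure 2.

    hidden_features: (N, D) first-layer GAT outputs (e.g. D = 64 on Cora)
    attention:       hypothetical (K, N, N) array of per-head normalized coefficients
    edges:           iterable of (i, j) pairs present in the graph
    """
    coords = TSNE(n_components=2).fit_transform(hidden_features)       # (N, 2) embedding
    # edge "thickness": attention summed over all K heads and both directions
    thickness = np.array([attention[:, i, j].sum() + attention[:, j, i].sum()
                          for i, j in edges])
    return coords, thickness
```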
# 4 CONCLUSIONS
We have presented graph attention networks (GATs), novel convolution-style neural networks that operate on graph-structured data, leveraging masked self-attentional layers. The graph attentional layer utilized throughout these networks is computationally efficient (does not require costly matrix operations, and is parallelizable across all nodes in the graph), allows for (implicitly) assigning different importances to different nodes within a neighborhood while dealing with different sized neighborhoods, and does not depend on knowing the entire graph structure upfront, thus addressing many of the theoretical issues with previous spectral-based approaches. Our models leveraging attention have successfully achieved or matched state-of-the-art performance across four well-established node classification benchmarks, both transductive and inductive (especially, with completely unseen graphs used for testing).
There are several potential improvements and extensions to graph attention networks that could be addressed as future work, such as overcoming the practical problems described in subsection 2.2 to be able to handle larger batch sizes. A particularly interesting research direction would be taking advantage of the attention mechanism to perform a thorough analysis on the model interpretability. Moreover, extending the method to perform graph classification instead of node classification would also be relevant from the application perspective. Finally, extending the model to incorporate edge features (possibly indicating relationship among nodes) would allow us to tackle a larger variety of problems.
Figure 2: A t-SNE plot of the computed feature representations of a pre-trained GAT model's first hidden layer on the Cora dataset. Node colors denote classes. Edge thickness indicates aggregated normalized attention coefficients between nodes i and j, across all eight attention heads ($\sum_{k=1}^{K} \alpha^k_{ij} + \alpha^k_{ji}$).
# ACKNOWLEDGEMENTS
The authors would like to thank the developers of TensorFlow (Abadi et al., 2015). PV and PL have received funding from the European Union's Horizon 2020 research and innovation programme PROPAG-AGEING under grant agreement No 634821. We further acknowledge the support of the following agencies for research funding and computing support: CIFAR, Canada Research Chairs, Compute Canada and Calcul Québec, as well as NVIDIA for the generous GPU support. Special thanks to: Benjamin Day and Fabian Jansen for kindly pointing out issues in a previous iteration of the paper; Michał Drożdżal for useful discussions, feedback and support; and Gaétan Marceau for reviewing the paper prior to submission.
# REFERENCES | 1710.10903#38 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 39 | # REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993â2001, 2016. | 1710.10903#39 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 40 | James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993â2001, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR), 2015.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. International Conference on Learning Representations (ICLR), 2014.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016. | 1710.10903#40 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 41 | Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). International Conference on Learning Representations (ICLR), 2016.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pp. 3844–3852, 2016.
Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017. | 1710.10903#41 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 42 | Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224–2232, 2015.
Paolo Frasconi, Marco Gori, and Alessandro Sperduti. A general framework for adaptive processing of data structures. IEEE transactions on Neural Networks, 9(5):768â786, 1998.
Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. A convolutional encoder model for neural machine translation. CoRR, abs/1611.02344, 2016. URL http://arxiv. org/abs/1611.02344. | 1710.10903#42 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 43 | Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks, pp. 729–734, 2005.
William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. Neural Information Processing Systems (NIPS), 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. | 1710.10903#43 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 44 | Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Yedid Hoshen. VAIN: Attentional multi-agent predictive modeling. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2698–2708. Curran Associates, 2017. URL http://papers.nips.cc/paper/6863-vain-attentional-multi-agent-predictive-modeling.pdf.
Simon Jégou, Michal Drozdzal, David Vázquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Workshop on Computer Vision in Vehicle Technology CVPRW, 2017.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations (ICLR), 2017. | 1710.10903#44 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 45 | Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. International Conference on Learning Representations (ICLR), 2016.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
Qing Lu and Lise Getoor. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 496–503, 2003.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579â2605, 2008.
Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. arXiv preprint arXiv:1611.08402, 2016. | 1710.10903#45 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 46 | Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 2014–2023, 2016.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.
Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61â80, 2009. | 1710.10903#46 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 47 | Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
A. Sperduti and A. Starita. Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714–735, May 1997. ISSN 1045-9227. doi: 10.1109/72.572108.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Aravind Subramanian, Pablo Tamayo, Vamsi K Mootha, Sayan Mukherjee, Benjamin L Ebert, Michael A Gillette, Amanda Paulovich, Scott L Pomeroy, Todd R Golub, Eric S Lander, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences, 102(43):15545–15550, 2005. | 1710.10903#47 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10903 | 48 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40â48, 2016.
Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 912–919, 2003. | 1710.10903#48 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204 | [
{
"id": "1706.06383"
},
{
"id": "1601.06733"
},
{
"id": "1506.05163"
},
{
"id": "1703.07326"
},
{
"id": "1703.03130"
},
{
"id": "1611.08402"
},
{
"id": "1706.03762"
},
{
"id": "1706.01427"
}
] |
1710.10723 | 0 |
# Simple and Effective Multi-Paragraph Reading Comprehension
# Christopher Clark∗ University of Washington [email protected]
# Matt Gardner Allen Institute for Artificial Intelligence [email protected]
# Abstract
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | 1710.10723#0 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 1 | from the input documents, which is then passed to the paragraph model to extract an answer (Joshi et al., 2017; Wang et al., 2017a). Confidence based methods apply the model to multiple paragraphs and return the answer with the highest confidence (Chen et al., 2017). Confidence methods have the advantage of being robust to errors in the (usually less sophisticated) paragraph selection step, however they require a model that can produce accurate confidence scores for each paragraph. As we shall show, naively trained models often struggle to meet this requirement.
In this paper we start by proposing an improved pipelined method which achieves state-of-the-art results. Then we introduce a method for training models to produce accurate per-paragraph confidence scores, and we show how combining this method with multiple paragraph selection further increases performance.
# Introduction | 1710.10723#1 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 2 | # Introduction
Teaching machines to answer arbitrary user-generated questions is a long-term goal of natural language processing. For a wide range of questions, existing information retrieval methods are capable of locating documents that are likely to contain the answer. However, automatically extracting the answer from those texts remains an open challenge. The recent success of neural models at answering questions given a related paragraph (Wang et al., 2017b; Tan et al., 2017) suggests neural models have the potential to be a key part of a solution to this problem. Training and testing neural models that take entire documents as input is extremely computationally expensive, so typically this requires adapting a paragraph-level model to process document-level input.
There are two basic approaches to this task. Pipelined approaches select a single paragraph
∗Work completed while interning at the Allen Institute
for Artificial Intelligence | 1710.10723#2 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 3 | There are two basic approaches to this task. Pipelined approaches select a single paragraph
∗Work completed while interning at the Allen Institute
for Artificial Intelligence
Our pipelined method focuses on addressing the challenges that come with training on document-level data. We propose a TF-IDF heuristic to select which paragraphs to train and test on. Since annotating entire documents is very expensive, data of this sort is typically distantly supervised, meaning only the answer text, not the answer spans, are known. To handle the noise this creates, we use a summed objective function that marginalizes the model's output over all locations the answer text occurs. We apply this approach with a model design that integrates some recent ideas in reading comprehension models, including self-attention (Cheng et al., 2016) and bi-directional attention (Seo et al., 2016).
Our confidence method extends this approach to better handle the multi-paragraph setting. Previous approaches trained the model on questions paired with paragraphs that are known a priori to contain the answer. This has several downsides: the model is not trained to produce low confidence | 1710.10723#3 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 4 | scores for paragraphs that do not contain an answer, and the training objective does not require confidence scores to be comparable between paragraphs. We resolve these problems by sampling paragraphs from the context documents, including paragraphs that do not contain an answer, to train on. We then use a shared-normalization objective where paragraphs are processed independently, but the probability of an answer candidate is marginalized over all paragraphs sampled from the same document. This requires the model to produce globally correct output even though each paragraph is processed independently. | 1710.10723#4 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
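The shared-normalization objective described above can be written in a few lines. The sketch below is an illustration under simplifying assumptions (a single document, start scores only, no padding handling), not the released implementation; the names `shared_norm_start_loss`, `scores`, and `answer_mask` are hypothetical.

```python
import torch

def shared_norm_start_loss(scores, answer_mask):
    """Shared-normalization sketch: `scores` holds start scores for every token
    of every paragraph sampled from one document, shape (num_paragraphs, max_len);
    `answer_mask` is a boolean tensor of the same shape marking answer-start tokens.
    Paragraphs are scored independently, but the softmax normalization is shared
    across all paragraphs of the document."""
    flat_scores = scores.reshape(-1)
    flat_mask = answer_mask.reshape(-1)
    log_z = torch.logsumexp(flat_scores, dim=0)                   # shared partition function
    log_answer = torch.logsumexp(flat_scores[flat_mask], dim=0)   # marginalize over answer starts
    return log_z - log_answer                                     # negative log-likelihood

scores = torch.randn(4, 50, requires_grad=True)    # 4 paragraphs, 50 tokens each
answer_mask = torch.zeros(4, 50, dtype=torch.bool)
answer_mask[1, 7] = True                           # answer occurs in paragraph 1
loss = shared_norm_start_loss(scores, answer_mask)
loss.backward()
```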
1710.10723 | 5 | We evaluate our work on TriviaQA web (Joshi et al., 2017), a dataset of questions paired with web documents that contain the answer. We achieve 71.3 F1 on the test set, a 15 point absolute gain over prior work. We additionally perform an ablation study on our pipelined method, and we show the effectiveness of our multi-paragraph methods on TriviaQA unfiltered and a modified version of SQuAD (Rajpurkar et al., 2016) where only the correct document, not the correct paragraph, is known. We also build a demonstration of our method by combining our model with a re-implementation of the retrieval mechanism used in TriviaQA to build a prototype end-to-end general question answering system 1. We release our code 2 to facilitate future work in this field.
# 2 Pipelined Method
In this section we propose an approach to training pipelined question answering systems, where a single paragraph is heuristically extracted from the context document(s) and passed to a paragraph-level QA model. We suggest using a TF-IDF based paragraph selection method and argue that a summed objective function should be used to handle noisy supervision. We also propose a refined model that incorporates some recent modeling ideas for reading comprehension systems. | 1710.10723#5 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 6 | # 2.1 Paragraph Selection
Our paragraph selection method chooses the paragraph that has the smallest TF-IDF cosine distance with the question. Document frequencies are computed using just the paragraphs within the relevant documents, not the entire corpus. The advantage of this approach is that if a question word is prevalent in the context, for example if the word "tiger" is prevalent in the document(s) for the question "What is the largest living sub-species of the tiger?", greater weight will be given to question words that are less common, such as "largest" or "sub-species". Relative to selecting the first paragraph in the document, this improves the chance of the selected paragraph containing the correct answer from 83.1% to 85.1% on TriviaQA web. We also expect this approach to do a better job of selecting paragraphs that relate to the question since it is explicitly selecting paragraphs that contain question words.
# 1documentqa.allenai.org 2github.com/allenai/document-qa
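A minimal sketch of this TF-IDF selection step, assuming scikit-learn's `TfidfVectorizer` as a stand-in for the exact weighting used in the paper; `select_paragraph` and the example texts are hypothetical, and document frequencies are fit only on the candidate paragraphs, as described above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_paragraph(question: str, paragraphs: list) -> int:
    """Return the index of the paragraph with the smallest TF-IDF cosine
    distance (i.e., largest cosine similarity) to the question."""
    tfidf = TfidfVectorizer(stop_words="english")
    para_vecs = tfidf.fit_transform(paragraphs)      # IDF fit on these paragraphs only
    q_vec = tfidf.transform([question])
    sims = cosine_similarity(q_vec, para_vecs)[0]    # distance = 1 - similarity
    return int(np.argmax(sims))

paragraphs = [
    "Tigers are the largest living cat species and occupy much of Asia.",
    "The Siberian tiger is often cited as the largest living sub-species of the tiger.",
    "An unrelated paragraph about river ecosystems.",
]
print(select_paragraph("What is the largest living sub-species of the tiger?", paragraphs))
```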
# 2.2 Handling Noisy Labels | 1710.10723#6 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 7 | # 2.2 Handling Noisy Labels
Question: Which British general was killed at Khartoum in 1885? Answer: Gordon Context: In February 1885 Gordon returned to the Sudan to evacuate Egyptian forces. Khartoum came under siege the next month and rebels broke into the city, killing Gordon and the other defenders. The British public reacted to his death by acclaiming "Gordon of Khartoum", a saint. However, historians have suggested that Gordon defied orders and refused to evacuate...
Figure 1: Noisy supervision causes many spans of text that contain the answer, but are not situated in a context that relates to the question, to be labelled as correct answer spans (highlighted in red). This risks distracting the model from learning from more relevant spans (highlighted in green). | 1710.10723#7 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
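The distant supervision described around Figure 1 amounts to marking every occurrence of the answer string as a candidate answer span. A rough sketch, assuming simple case-insensitive string matching rather than the tokenized matching a real pipeline would use:

```python
import re

def distant_answer_spans(context: str, answer: str):
    """Return every (start, end) character span whose text matches the answer.
    All such spans are labelled correct under distant supervision, even when
    the surrounding context is unrelated to the question."""
    return [(m.start(), m.end())
            for m in re.finditer(re.escape(answer), context, flags=re.IGNORECASE)]

context = ("In February 1885 Gordon returned to the Sudan. Rebels broke into the "
           "city, killing Gordon and the other defenders.")
print(distant_answer_spans(context, "Gordon"))   # two candidate spans, only one well-supported
```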
1710.10723 | 8 | In a distantly supervised setup we label all text spans that match the answer text as being correct. This can lead to training the model to select unwanted answer spans. Figure 1 contains an example. To handle this difficulty, we use a summed objective function similar to the one from Kadlec et al. (2016), that optimizes the sum of the probabilities of all answer spans. The models we consider here work by independently predicting the start and end token of the answer span, so we take this approach for both predictions. Thus the objective for the span start boundaries becomes:
$$-\log \left( \frac{\sum_{a \in A} e^{s_a}}{\sum_{i=1}^{n} e^{s_i}} \right)$$
where $A$ is the set of tokens that start an answer span, $n$ is the number of context tokens, and $s_i$ is a scalar score computed by the model for span $i$. This optimizes the negative log-likelihood of selecting any correct start token. This objective is agnostic to how the model distributes probability
mass across the possible answer spans, thus the model can "choose" to focus on only the more relevant spans.
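A minimal sketch of the summed start objective above, computed with a numerically stable log-sum-exp; `start_scores` and `answer_starts` are hypothetical names, and the same form would be applied to the end scores.

```python
import torch

def summed_start_objective(start_scores, answer_starts):
    """-log( sum_{a in A} e^{s_a} / sum_i e^{s_i} ): the negative log of the
    total probability assigned to any labelled answer-start token.
    start_scores: length-n tensor of per-token scores s_i.
    answer_starts: list of indices in A."""
    log_z = torch.logsumexp(start_scores, dim=0)
    log_ans = torch.logsumexp(start_scores[answer_starts], dim=0)
    return log_z - log_ans

scores = torch.randn(120)                      # scores for 120 context tokens
print(summed_start_objective(scores, [17, 54]))
```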
# 2.3 Model | 1710.10723#8 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 9 | mass across the possible answer spans, thus the model can "choose" to focus on only the more relevant spans.
# 2.3 Model
Figure 2: High level outline of our model.
We use a model with the following layers (shown in Figure 2):
Embedding: We embed words using pre-trained word vectors. We also embed the characters in each word into size 20 vectors which are learned, and run a convolutional neural network followed by max-pooling to get character-derived embeddings for each word. The character-level and word-level embeddings are then concatenated and passed to the next layer. We do not update the word embeddings during training. A shared
bi-directional GRU (Cho et al., 2014) is used to map the question and passage embeddings to context-aware embeddings. | 1710.10723#9 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
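A sketch of the embedding layer described above in PyTorch. The size-20 character embeddings, the max-pooled character CNN, and the frozen word vectors follow the description; the number of filters (100) and the kernel width (5) are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Concatenate frozen pre-trained word vectors with learned
    character-CNN features (max-pooled over the characters of each word)."""
    def __init__(self, word_vectors, n_chars, char_dim=20, n_filters=100, width=5):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=True)  # not updated
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=width, padding=width // 2)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, word_len)
        b, s, w = char_ids.shape
        chars = self.char_emb(char_ids).view(b * s, w, -1).transpose(1, 2)          # (b*s, char_dim, w)
        char_feats = torch.relu(self.conv(chars)).max(dim=2).values.view(b, s, -1)  # max over characters
        return torch.cat([self.word_emb(word_ids), char_feats], dim=-1)

emb = CharWordEmbedder(torch.randn(5000, 300), n_chars=70)
out = emb(torch.randint(0, 5000, (2, 12)), torch.randint(0, 70, (2, 12, 16)))
print(out.shape)   # (2, 12, 400)
```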
1710.10723 | 10 | bi-directional GRU (Cho et al., 2014) is used to map the question and passage embeddings to context-aware embeddings.
Attention: The bi-directional attention mechanism from the Bi-Directional Attention Flow (BiDAF) model (Seo et al., 2016) is used to build a query-aware context representation. Let $h_i$ be the vector for context word $i$, $q_j$ be the vector for question word $j$, and $n_q$ and $n_c$ be the lengths of the question and context respectively. We compute attention between context word $i$ and question word $j$ as:
$$a_{ij} = w_1 \cdot h_i + w_2 \cdot q_j + w_3 \cdot (h_i \odot q_j)$$
where $w_1$, $w_2$, and $w_3$ are learned vectors and $\odot$ is element-wise multiplication. We then compute an attended vector $c_i$ for each context token as:
$$p_{ij} = \frac{e^{a_{ij}}}{\sum_{j=1}^{n_q} e^{a_{ij}}}, \qquad c_i = \sum_{j=1}^{n_q} q_j \, p_{ij}$$
We also compute a query-to-context vector $q_c$:
$$m_i = \max_{1 \le j \le n_q} a_{ij}, \qquad p_i = \frac{e^{m_i}}{\sum_{i=1}^{n_c} e^{m_i}}, \qquad q_c = \sum_{i=1}^{n_c} h_i \, p_i$$
The final vector for each token is built by concatenating $h_i$, $c_i$, $h_i \odot c_i$, and $q_c \odot c_i$. In our model we subsequently pass the result through a linear layer with ReLU activations. | 1710.10723#10 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
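A compact PyTorch sketch of the bi-directional attention described above, written without batching for clarity; `bidaf_attention` and the dimensions are hypothetical, and a real implementation would also mask padded positions.

```python
import torch

def bidaf_attention(h, q, w1, w2, w3):
    """h: (n_c, d) context vectors, q: (n_q, d) question vectors,
    w1, w2, w3: learned (d,) vectors. Returns [h; c; h*c; q_c*c], shape (n_c, 4d)."""
    # a_ij = w1.h_i + w2.q_j + w3.(h_i * q_j)
    a = (h @ w1).unsqueeze(1) + (q @ w2).unsqueeze(0) + (h * w3) @ q.T
    p = torch.softmax(a, dim=1)        # attention over question words
    c = p @ q                          # context-to-query vectors c_i
    m = a.max(dim=1).values            # m_i = max_j a_ij
    qc = torch.softmax(m, dim=0) @ h   # query-to-context vector q_c
    return torch.cat([h, c, h * c, qc.unsqueeze(0) * c], dim=1)

h, q = torch.randn(30, 64), torch.randn(8, 64)
w1, w2, w3 = (torch.randn(64) for _ in range(3))
print(bidaf_attention(h, q, w1, w2, w3).shape)   # (30, 256)
```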
1710.10723 | 11 | Self-Attention: Next we use a layer of residual self-attention. The input is passed through another bi-directional GRU. Then we apply the same attention mechanism, only now between the passage and itself. In this case we do not use query-to-context attention and we set $a_{ij} = -\infty$ if $i = j$. As before, we pass the concatenated output through a linear layer with ReLU activations. This layer is applied residually, so this output is additionally summed with the input.
Prediction: In the last layer of our model a bi-directional GRU is applied, followed by a linear layer that computes answer start scores for each token. The hidden states of that layer are concatenated with the input and fed into a second bi-directional GRU and linear layer to predict answer end scores. The softmax operation is applied to the start and end scores to produce start and end probabilities, and we optimize the negative log-likelihood of selecting correct start and end tokens. Dropout: We also employ variational dropout, where a randomly selected set of hidden units | 1710.10723#11 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 12 | are set to zero across all time steps during training (Gal and Ghahramani, 2016). We dropout the input to all the GRUs, including the word embeddings, as well as the input to the attention mechanisms, at a rate of 0.2.
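A sketch of variational dropout as described here: one Bernoulli mask is drawn per sequence and hidden unit and reused across every time step. The inverted-dropout scaling is an implementation assumption, not something stated in the passage.

```python
import torch

def variational_dropout(x, rate=0.2, training=True):
    """x: (batch, time, hidden). The same mask is applied at every time step."""
    if not training or rate == 0.0:
        return x
    keep = 1.0 - rate
    mask = torch.bernoulli(torch.full((x.size(0), 1, x.size(2)), keep, device=x.device))
    return x * mask / keep   # inverted dropout scaling

x = torch.randn(4, 25, 128)
print(variational_dropout(x, rate=0.2).shape)
```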
# 3 Confidence Method
We adapt this model to the multi-paragraph setting by using the un-normalized and un-exponentiated (i.e., before the softmax operator is applied) score given to each span as a measure of the model's confidence. For the boundary-based models we use here, a span's score is the sum of the start and end score given to its start and end token. At test time we run the model on each paragraph and select the answer span with the highest confidence. This is the approach taken by Chen et al. (2017). | 1710.10723#12 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
Applying this approach without altering how the model is trained is, however, a gamble; the training objective does not require these conï¬- dence scores to be comparable between para- graphs. Our experiments in Section 5 show that in practice these models can be very poor at provid- ing good conï¬dence scores. Table 1 shows some qualitative examples of this phenomenon. | 1710.10723#12 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
We hypothesize that there are two key reasons a model's confidence scores might not be well calibrated. First, for models trained with the softmax objective, the pre-softmax scores for all spans can be arbitrarily increased or decreased by a constant value without changing the resulting softmax probability distribution. As a result, nothing prevents models from producing scores that are arbitrarily all larger or all smaller for one paragraph than another. Second, if the model only sees paragraphs that contain answers, it might become too confident in heuristics or patterns that are only effective when it is known a priori that an answer exists. For example, in Table 1 we observe that the model will assign high confidence values to spans that strongly match the category of the answer, even if the question words do not match the context. This might work passably well if an answer is present, but can lead to highly over-confident extractions in other cases. Similar kinds of errors have been observed when distractor sentences are added to the context (Jia and Liang, 2017).
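The first point can be verified in a few lines: shifting every span score in a paragraph by a constant leaves the per-paragraph softmax untouched while changing the raw confidences used for cross-paragraph comparison. This is a small illustrative demo, not part of the paper.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

scores = np.array([1.0, 3.0, 0.5])
shifted = scores + 100.0  # add an arbitrary constant to every span score
print(np.allclose(softmax(scores), softmax(shifted)))  # True: probabilities unchanged,
# yet every raw span confidence in this paragraph just grew by 100.
```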
We experiment with four approaches to training models to produce comparable confidence scores, shown in the following subsections. In all cases we will sample paragraphs that do not contain an answer as additional training points.
# 3.1 Shared-Normalization
In this approach all paragraphs are processed independently as usual. However, a modified objective function is used where the normalization factor in the softmax operation is shared between all paragraphs from the same context. Therefore, the probability that token a from paragraph p starts an answer span is computed as:
\frac{e^{s_{ap}}}{\sum_{j \in P} \sum_{i} e^{s_{ij}}}
where P is the set of paragraphs that are from the same context as p, and s_{ij} is the score given to token i from paragraph j. We train on this objective by including multiple paragraphs from the same context in each mini-batch.
This is similar to simply feeding the model multiple paragraphs from each context concatenated together, except that each paragraph is processed independently until the normalization step. The key idea is that this will force the model to produce scores that are comparable between paragraphs, even though it does not have access to information about the other paragraphs being considered.
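A minimal NumPy sketch of the shared-normalization objective for start scores (end scores are handled identically); the function signature and variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def shared_norm_nll(start_scores_per_par, answer_starts):
    """NLL of the correct start token(s) with the softmax normalizer shared
    across all paragraphs sampled from one context.

    start_scores_per_par: list of 1-D numpy arrays of raw start scores.
    answer_starts: list of (paragraph_index, token_index) pairs marking
    correct start tokens.
    """
    all_scores = np.concatenate(start_scores_per_par)
    m = all_scores.max()
    log_z = m + np.log(np.exp(all_scores - m).sum())       # normalizer over all paragraphs
    correct = np.array([start_scores_per_par[p][i] for p, i in answer_starts])
    log_p = np.log(np.exp(correct - log_z).sum())           # sum probability over answer tokens
    return -log_p
```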
# 3.2 Merge
As an alternative to the previous method, we experiment with concatenating all paragraphs sampled from the same context together during training. A paragraph separator token with a learned embedding is added before each paragraph. Our motive is to test whether simply exposing the model to more text will teach the model to be more adept at ignoring irrelevant text.
# 3.3 No-Answer Option
We also experiment with allowing the model to select a special "no-answer" option for each paragraph. First, note that the independent-bounds objective can be re-written as:
-\log\left(\frac{e^{s_a}}{\sum_{j} e^{s_j}}\right) - \log\left(\frac{e^{g_b}}{\sum_{j} e^{g_j}}\right) = -\log\left(\frac{e^{s_a + g_b}}{\sum_{i} \sum_{j} e^{s_i + g_j}}\right)
where s_j and g_j are the scores for the start and end bounds produced by the model for token j, and a and b are the correct start and end tokens. We have the model compute another score, z, to represent
| Question | Low Confidence Correct Extraction | High Confidence Incorrect Extraction |
| --- | --- | --- |
| When is the Members Debate held? | Immediately after Decision Time a "Members Debate" is held, which lasts for 45 minutes... | ...majority of the Scottish electorate voted for it in a referendum to be held on 1 March 1979 that represented at least... |
| How many tree species are in the rainforest? | ...plant species is the highest on Earth with one 2001 study finding a quarter square kilometer (62 acres) of Ecuadorian rainforest supports more than 1,100 tree species... | The affected region was approximately 1,160,000 square miles (3,000,000 km2) of rainforest, compared to 734,000 square miles |
| Who was Warsz? | ...In actuality, Warsz was a 12th/13th century nobleman who owned a village located at the modern... | One of the most famous people born in Warsaw was Maria Sklodowska-Curie, who achieved international... |
| How much did the initial LM weight in kg? | The initial LM model weighed approximately 33,300 pounds (15,000 kg), and... | The module was 11.42 feet (3.48 m) tall, and weighed approximately 12,250 pounds (5,560 kg) |
| What do the auricles do? | ...many species of lobates have four auricles, gelatinous projections edged with cilia that produce water currents that help direct microscopic prey toward the mouth... | The Cestida are ribbon-shaped planktonic animals, with the mouth and aboral organ aligned in the middle of opposite edges of the ribbon |
Table 1: Examples from SQuAD where a paragraph-level model was less confident in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right). Even if the passage has no correct answer, the model still assigns high confidence to phrases that match the category the question is asking about. Because the confidence scores are not well-calibrated, this confidence is often higher than the confidence assigned to the correct answer span.
the weight given to a "no-answer" possibility. Our revised objective function becomes:

-\log\left(\frac{(1-\delta)\, e^{z} + \delta\, e^{s_a + g_b}}{e^{z} + \sum_{i} \sum_{j} e^{s_i + g_j}}\right)

where δ is 1 if an answer exists and 0 otherwise. If there are multiple answer spans we use the same objective, except the numerator includes the summation over all answer start and end tokens.

# 3.4 Sigmoid

As a final baseline, we consider training models with the sigmoid loss objective function. That is, we compute a start/end probability for each token in the context by applying the sigmoid function to the start/end scores of each token. A cross entropy loss is used on each individual probability. The intuition is that, since the scores are being evaluated independently of one another, they will be comparable between different paragraphs.
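To make the no-answer objective above concrete, here is a minimal NumPy sketch. The interface (raw start/end score vectors, a scalar z, and a possibly empty list of answer spans) is an assumption for illustration.

```python
import numpy as np

def no_answer_nll(start_scores, end_scores, z, answer_spans):
    """Objective with a per-paragraph "no-answer" score z.

    start_scores, end_scores: 1-D numpy arrays of raw token scores.
    answer_spans: list of (start, end) token pairs; empty if the paragraph
    contains no answer (delta = 0). A span's score is s_a + g_b.
    """
    # log of the denominator: e^z + sum_{i,j} e^{s_i + g_j}
    pair_scores = start_scores[:, None] + end_scores[None, :]
    m = max(z, pair_scores.max())
    log_denom = m + np.log(np.exp(z - m) + np.exp(pair_scores - m).sum())

    if answer_spans:  # delta = 1: numerator sums over all answer spans
        num = np.array([start_scores[a] + end_scores[b] for a, b in answer_spans])
        log_num = num.max() + np.log(np.exp(num - num.max()).sum())
    else:             # delta = 0: numerator is e^z
        log_num = z
    return -(log_num - log_denom)
```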
We compute z by adding an extra layer at the end of our model. We compute a soft attention over the span start scores, p_i = e^{s_i} / \sum_j e^{s_j}, and then take the weighted sum of the hidden states from the GRU used to generate those scores, h_i, giving v_1 = \sum_i h_i p_i. We compute a second vector, v_2, in the same way using the end scores. Finally, a step of learned attention is performed on the output of the Self-Attention layer that computes:

a_i = w \cdot h_i, \qquad p_i = \frac{e^{a_i}}{\sum_{j=1}^{n} e^{a_j}}, \qquad v_3 = \sum_{i=1}^{n} h_i p_i

where w is a learned weight vector and h_i is the vector for token i. We concatenate these three vectors and use them as input to a two layer network with an 80 dimensional hidden layer and ReLU activations that produces z as its only output.

# 4 Experimental Setup

# 4.1 Datasets

We evaluate our approach on three datasets: TriviaQA unfiltered (Joshi et al., 2017), a dataset of questions from trivia databases paired with documents found by completing a web search of the questions; TriviaQA web, a dataset derived from TriviaQA unfiltered by treating each question-document pair where the document contains the question answer as an individual training point; and SQuAD (Rajpurkar et al., 2016), a collection of Wikipedia articles and crowdsourced questions.
# 4.2 Preprocessing

We note that for TriviaQA web we do not subsample as was done by Joshi et al. (2017), instead training on the full 530k question-document training pairs. We also observed that the metrics for TriviaQA are computed after applying a small amount of text normalization (stripping punctuation, removing articles, etc.) to both the ground truth text and the predicted text. As a result, some spans of text that would have been considered an exact match after normalization were not marked as answer spans during preprocessing, which only detected exact string matches. We fix this issue by labeling all spans of text that would have been considered an exact match by the official evaluation script as an answer span.
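A sketch of this labeling step, using a simplified stand-in for the official normalization (lowercasing, stripping punctuation and articles); the real evaluation scripts are the reference, so treat this as illustrative only.

```python
import re

def normalize(text):
    """Simplified normalization: lowercase, strip punctuation and articles."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def label_answer_spans(tokens, answer, max_len=8):
    """Mark every token span whose normalized text matches the normalized answer."""
    target = normalize(answer)
    spans = []
    for i in range(len(tokens)):
        for j in range(i, min(i + max_len, len(tokens))):
            if normalize(" ".join(tokens[i:j + 1])) == target:
                spans.append((i, j))
    return spans

print(label_answer_spans("the Apollo Lunar Module .".split(), "Apollo Lunar Module"))
# [(0, 3), (0, 4), (1, 3), (1, 4)]
```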
In TriviaQA, documents often contain many small paragraphs, so we merge paragraphs together as needed to get paragraphs of up to a target size. We use a maximum size of 400 unless stated otherwise. Paragraph separator tokens with learned embeddings are added between merged paragraphs to preserve formatting information.
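A greedy version of this merging step is sketched below; the separator string and the token-list representation are assumptions (in the model the separator is just another vocabulary item with a learned embedding).

```python
def merge_paragraphs(paragraphs, max_tokens=400, separator="<PARAGRAPH>"):
    """Greedily merge small paragraphs into groups of at most max_tokens tokens,
    inserting a separator token between the original paragraphs so formatting
    information is preserved."""
    merged, current = [], []
    for tokens in paragraphs:
        extra = len(tokens) + (1 if current else 0)
        if current and len(current) + extra > max_tokens:
            merged.append(current)
            current = []
        if current:
            current.append(separator)
        current.extend(tokens)
    if current:
        merged.append(current)
    return merged

# Example: three short paragraphs, each a list of tokens.
paras = [["a"] * 150, ["b"] * 200, ["c"] * 120]
print([len(p) for p in merge_paragraphs(paras)])  # [351, 120]
```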
# 4.3 Sampling
Our confidence-based approaches are all trained by sampling paragraphs, including paragraphs that do not contain an answer, during training. For SQuAD and TriviaQA web we take the top four paragraphs ranked by TF-IDF score for each question-document pair. We then sample two different paragraphs from this set each epoch. Since we observe that the higher-ranked paragraphs are much more likely to contain the context needed to answer the question, we sample the highest ranked paragraph that contains an answer twice as often as the others. For the merge and shared-norm approaches, we additionally require that at least one of the paragraphs contains an answer span.
For TriviaQA unfiltered, where we have multiple documents for each question, we find it beneficial to use a more sophisticated paragraph ranking function. In particular, we use a linear function with five features: the TF-IDF cosine distance, whether the paragraph was the first in its document, how many tokens occur before it, and the number of case insensitive and case sensitive matches with question words. The function is trained on the distantly supervised objective of selecting paragraphs that contain at least one answer span. We select the top 16 paragraphs for each question and sample pairs of paragraphs as before.
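For reference, a bag-of-words sketch of the TF-IDF ranking used for paragraph selection; a production setup would use a proper tokenizer and corpus-level IDF statistics, so this is illustrative only.

```python
import math
from collections import Counter

def tfidf_rank(question, paragraphs):
    """Rank paragraphs by TF-IDF cosine similarity to the question."""
    docs = [p.lower().split() for p in paragraphs]
    q = question.lower().split()
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(len(docs) / (1 + df[w])) + 1 for w in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {w: tf[w] * idf.get(w, 1.0) for w in tf}

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(q)
    scores = [cosine(qv, vec(d)) for d in docs]
    return sorted(range(len(paragraphs)), key=lambda i: -scores[i])
```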
# 4.4 Implementation
We train the model with the Adadelta optimizer (Zeiler, 2012) with a batch size of 60 for TriviaQA and 45 for SQuAD. At test time we select the most probable answer span of length less than
| Model | EM | F1 |
| --- | --- | --- |
| baseline (Joshi et al., 2017) | 41.08 | 47.40 |
| BiDAF | 50.21 | 56.86 |
| BiDAF + TF-IDF | 53.41 | 59.18 |
| BiDAF + sum | 56.22 | 61.48 |
| BiDAF + TF-IDF + sum | 57.20 | 62.44 |
| our model + TF-IDF + sum | 61.10 | 66.04 |
Table 2: Results on TriviaQA web using our pipelined method. We significantly improve upon the baseline by combining the preprocessing procedures, TF-IDF paragraph selection, the sum objective, and our model design.
or equal to 8 for TriviaQA and 17 for SQuAD. The GloVe 300 dimensional word vectors released by Pennington et al. (2014) are used for word embeddings. On SQuAD, we use a dimensionality of size 100 for the GRUs and of size 200 for the linear layers employed after each attention mechanism. We find for TriviaQA, likely because there is more data, using a larger dimensionality of 140 for each GRU and 280 for the linear layers is beneficial. During training, we maintain an exponential moving average of the weights with a decay rate of 0.999. We use the weight averages at test time.
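The weight averaging mentioned above can be sketched as follows; the class name and dictionary-of-arrays representation are illustrative, not the authors' implementation.

```python
import numpy as np

class WeightEMA:
    """Exponential moving average of model weights (decay 0.999).

    Call update() after each training step; evaluate at test time with
    the averaged weights instead of the raw ones.
    """
    def __init__(self, weights, decay=0.999):
        self.decay = decay
        self.avg = {name: w.copy() for name, w in weights.items()}

    def update(self, weights):
        for name, w in weights.items():
            self.avg[name] = self.decay * self.avg[name] + (1.0 - self.decay) * w

# Example with toy weights:
weights = {"w1": np.zeros(3)}
ema = WeightEMA(weights)
for step in range(10):
    weights["w1"] += 0.1          # stand-in for an optimizer update
    ema.update(weights)
print(ema.avg["w1"])              # lags slightly behind the raw weights
```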
# 5 Results
# 5.1 TriviaQA Web
First, we do an ablation study on TriviaQA web to show the effects of our proposed methods for our pipeline model. We start with an implementation of the baseline from (Joshi et al., 2017). Their system selects paragraphs by taking the first 400 tokens of each document, uses BiDAF (Seo et al., 2016) as the paragraph model, and selects a random answer span from each paragraph each epoch to be used in BiDAF's cross entropy loss function during training. Paragraphs of size 800 are used at test time. As shown in Table 2, our implementation of this approach outperforms the results reported by Joshi et al. (2017) significantly, likely because we are not subsampling the data. We find both TF-IDF ranking and the sum objective to be effective; even without changing the model we achieve state-of-the-art results. Using our refined model increases the gain by another 4 points.
Next we show the results of our confidence-based approaches. In this setting we group each document's text into paragraphs of at most 400 tokens and rank them using our TF-IDF heuristic. Then we measure the performance of our proposed
[Figure 3 panels: TriviaQA Web F1 vs. Number of Paragraphs (left); TriviaQA Web Verified F1 vs. Number of Paragraphs (right).]
Figure 3: Results on TriviaQA web (left) and verified TriviaQA web (right) when applying our models to multiple paragraphs from each document. The shared-norm, merge, and no-answer training methods improve the model's ability to utilize more text, with the shared-norm method being significantly ahead of the others on the verified set and tied with the merge approach on the general set.
| Model | EM (All) | F1 (All) | EM (Verified) | F1 (Verified) |
| --- | --- | --- | --- | --- |
| baseline (Joshi et al., 2017) | 40.74 | 47.06 | 49.54 | 55.80 |
| MEMEN* (Pan et al., 2017) | 43.16 | 46.90 | 49.28 | 55.83 |
| Mnemonic Reader (Hu et al., 2017) | 46.94 | 52.85 | 54.45 | 59.46 |
| Reading Twice for NLU (Weissenborn et al., 2017a) | 50.56 | 56.73 | 63.20 | 67.97 |
| S-Norm (ours) | 66.37 | 71.32 | 79.97 | 83.70 |
*Results on the dev set
Table 3: Published TriviaQA results. We advance the state of the art by about 15 points on both test sets.
approaches as the model is used to independently process an increasing number of these paragraphs and the model's most confident answer is returned. We additionally measure performance on the verified portion of TriviaQA, a small subset of the question-document pairs in TriviaQA web where humans have manually verified that the document contains sufficient context to answer the question. The results are shown in Figure 3.
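The evaluation sweep behind Figures 3-5 can be sketched as follows; the model and F1 interfaces are assumptions (any paragraph-level QA model returning an answer string and a confidence would fit).

```python
def evaluate_vs_num_paragraphs(examples, model, f1_fn, max_k=15):
    """Compute F1 when the model may consult its top-k ranked paragraphs.

    examples: iterable of (ranked_paragraphs, question, gold_answers).
    model(question, paragraph) is assumed to return (answer_text, confidence);
    the most confident answer across the k paragraphs is the prediction.
    """
    results = {}
    for k in range(1, max_k + 1):
        scores = []
        for paragraphs, question, gold in examples:
            candidates = [model(question, p) for p in paragraphs[:k]]
            answer, _ = max(candidates, key=lambda c: c[1])
            scores.append(max(f1_fn(answer, g) for g in gold))
        results[k] = sum(scores) / len(scores)
    return results
```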
On these datasets even the model trained without any of the proposed training methods ("none") improves as it is allowed to use more text, showing it does a passable job at focusing on the correct paragraph. The no-answer option training approach leads to a significant improvement, and the shared-norm and merge approaches are even better. On the verified set, the shared-norm approach is solidly ahead of the other options. This suggests the shared-norm model is better at extracting answers when the answer is clearly stated in the text, but worse at guessing the answer in other cases.
We use the shared-norm approach for evaluation on the TriviaQA test set. We found that increasing the paragraph size to 800 at test time, and re-training the model on paragraphs of size 600, was slightly beneficial, allowing our model to
reach 66.04 EM and 70.98 F1 on the dev set. We submitted this model to be evaluated on the TriviaQA test set and achieved 66.37 EM and 71.32 F1, firmly ahead of prior work, as shown in Table 3. Note that human annotators have estimated that only 75.4% of the question-document pairs contain sufficient evidence to answer the question (Joshi et al., 2017), which suggests we are approaching the upper bound for this task. However, the score of 83.7 F1 on the verified set suggests that there is still room for improvement.
# 5.2 TriviaQA Unfiltered
Next we apply our confidence methods to TriviaQA unfiltered. This dataset is of particular interest because the system is not told which document contains the answer, so it provides a plausible simulation of attempting to answer a question using a document retrieval system. We show the same graph as before for this dataset in Figure 4. On this dataset it is more important to train the model to produce well calibrated confidence scores. Note the base model starts to lose performance as more paragraphs are used, showing that errors are being caused by the model being overly confident in incorrect extractions.
[Figure 4: Unfiltered TriviaQA F1 vs. Number of Paragraphs.]
Figure 4: Results for our confidence methods on TriviaQA unfiltered. Here we see a more dramatic difference between these models. The shared-norm approach is the strongest, while the base model starts to lose performance as more paragraphs are used.
| Model | EM (Dev) | F1 (Dev) | EM (Test) | F1 (Test) |
| --- | --- | --- | --- | --- |
| none | 71.60 | 80.78 | 72.14 | 81.05 |
| sigmoid | 70.28 | 79.05 | - | - |
| merge | 71.20 | 80.26 | - | - |
| no-answer | 71.51 | 80.71 | - | - |
| shared-norm | 71.16 | 80.23 | - | - |

Table 4: Results on the standard SQuAD dataset. The test scores place our model as 8th on the SQuAD leaderboard among non-ensemble models³. Training with the proposed multi-paragraph approaches only leads to a marginal drop in performance in this setting.
# 5.3 SQuAD
We additionally evaluate our model on SQuAD. SQuAD questions were not built to be answered independently of their context paragraph, which makes it unclear how effective an evaluation tool they can be for document-level question answering. To assess this we manually label 500 random questions from the training set. We categorize questions as:
1. Context-independent, meaning it can be understood independently of the paragraph.
2. Document-dependent, meaning it can be understood given the article's title. For example, "What individual is the school named after?" for the document "Harvard University".
3. Paragraph-dependent, meaning it can only be understood given its paragraph. For example, "What was the first step in the reforms?".
³ as of 10/23/2017
[Figure 5: SQuAD F1 vs. Number of Paragraphs.]
Figure 5: Results for our confidence methods on document-level SQuAD. The base model does poorly in this case, rapidly losing performance once more than two paragraphs are used. While all our approaches had some benefit, the shared-norm model is the strongest, and is the only one to not lose performance as large numbers of paragraphs are used.
We find 67.4% of the questions to be context-independent, 22.6% to be document-dependent, and the remaining 10% to be paragraph-dependent. The many document-dependent questions stem from the fact that questions are frequently about the subject of the document, so the article's title is often sufficient to resolve co-references or ambiguities that appear in the question. Since a reasonably high fraction of the questions can be understood given the document they are from, and to isolate our analysis from the retrieval mechanism used, we choose to evaluate on the document level. We build documents by concatenating all the paragraphs in SQuAD from the same article together into a single document.
The performance of our models given the correct paragraph (i.e., in the standard SQuAD setting) is shown in Table 4. Our paragraph-level model is competitive on this task, and our variations to handle the multi-paragraph setting only cause a minor loss of performance.
We graph the document-level performance in Figure 5. For SQuAD, we find it crucial to employ one of the suggested confidence training techniques. The base model starts to drop in performance once more than two paragraphs are used. However, the shared-norm approach is able to reach a peak performance of 72.37 F1 and 64.08 EM given 15 paragraphs. Given our estimate that 10% of the questions are ambiguous if the paragraph is unknown, our approach appears to have adapted to the document-level task very well.
Finally, we compare the shared-norm model with the document-level result reported by Chen et al. (2017). We re-evaluate our model using the documents used by Chen et al. (2017), which consist of the same Wikipedia articles SQuAD was built from, but downloaded at different dates. The advantage of this dataset is that it does not allow the model to know a priori which paragraphs were filtered out during the construction of SQuAD. The disadvantage is that some of the articles have been edited since the questions were written, so some questions may no longer be answerable. Our model achieves 59.14 EM and 67.34 F1 on this dataset, which significantly outperforms the 49.7 EM reported by Chen et al. (2017).
# 5.4 Discussion
We found that models that have only been trained on answer-containing paragraphs can perform very poorly in the multi-paragraph setting. The results were particularly bad for SQuAD; we think this is partly because the paragraphs are shorter, so the model had less exposure to irrelevant text. In general, we found the shared-norm approach to be the most effective way to resolve this problem. The no-answer and merge approaches were moderately effective, but we note that they do not resolve the scaling problem inherent to the softmax objective we discussed in Section 3, which might be why they lagged behind. The sigmoid objective function reduces the paragraph-level performance considerably, especially on the TriviaQA datasets. We suspect this is because it is vulnerable to label noise, as discussed in Section 2.2.
# 6 Related Work
Reading Comprehension Datasets. The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets. The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text (Hermann et al., 2015; Hill et al., 2015). Additional datasets including SQuAD (Rajpurkar et al., 2016), WikiReading (Hewlett et al., 2016), MS Marco (Nguyen et al., 2016) and TriviaQA (Joshi et al., 2017) provided more realistic questions. Another dataset of trivia questions, Quasar-T (Dhingra et al., 2017), was introduced recently that uses ClueWeb09 (Callan et al., 2009) as its source for documents. In this work we choose to focus on SQuAD and TriviaQA.

Neural Reading Comprehension.
Reading comprehension systems typically use some form of attention (Wang and Jiang, 2016), although alternative architectures exist (Chen et al., 2017; Weissenborn et al., 2017b). Our model follows this approach, but includes some recent advances such as variational dropout (Gal and Ghahramani, 2016) and bi-directional attention (Seo et al., 2016). Self-attention has been used in several prior works (Cheng et al., 2016; Wang et al., 2017b; Pan et al., 2017). Our approach to allowing a reading comprehension model to produce a per-paragraph no-answer score is related to the approach used in the BiDAF-T (Min et al., 2017) model to produce per-sentence classification scores, although we use an attention-based method instead of max-pooling.
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
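The related-work chunk above notes that the model produces a per-paragraph no-answer score through an attention-based method. As a rough illustration of how such a score can enter the training loss, the sketch below (not taken from the paper's code; tensor names, shapes, and the helper itself are assumptions) prepends a no-answer logit to the span-start logits so that one softmax covers both options:

```python
import torch
import torch.nn.functional as F

def start_loss_with_no_answer(start_logits, no_answer_logit, answer_start, has_answer):
    # start_logits: (num_tokens,) float tensor of scores for each candidate start token
    # no_answer_logit: scalar tensor scoring "this paragraph contains no answer"
    # answer_start: int index of the true start token (ignored when has_answer is False)
    # has_answer: bool, whether this paragraph actually contains an answer
    logits = torch.cat([no_answer_logit.view(1), start_logits]).unsqueeze(0)  # (1, 1 + num_tokens)
    target = torch.tensor([answer_start + 1 if has_answer else 0])            # index 0 = "no answer"
    return F.cross_entropy(logits, target)
```

The same construction can be applied to the span-end scores.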
1710.10723 | 37 | Open QA. Open question answering has been the subject of much research, especially spurred by the TREC question answering track (Voorhees et al., 1999). Knowledge bases can be used, such as in (Berant et al., 2013), although the resulting systems are limited by the quality of the knowledge base. Systems that try to answer questions using natural language resources such as YodaQA (Baudiš, 2015) typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output.
Neural Open QA. Open question answering with neural models was considered by Chen et al. (2017), where researchers trained a model on SQuAD and combined it with a retrieval engine for Wikipedia articles. Our work differs because we focus on explicitly addressing the problem of applying the model to multiple paragraphs. A pipelined approach to QA was recently proposed by Wang et al. (2017a), where a ranker model is used to select a paragraph for the reading comprehension model to process.
# 7 Conclusion | 1710.10723#37 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 38 | # 7 Conclusion
We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer-containing paragraphs while using a shared-norm objective function can be very beneficial. Combining this with our suggestions for paragraph selection, using the summed training objective, and our model design allows us to advance the state of the art on TriviaQA by a large stride. As shown by our demo, this work can be directly applied to building deep learning powered open question answering systems.
# References
Petr Baudiš. 2015. YodaQA: A Modular Question Answering System Pipeline. In POSTER 2015 - 19th International Student Conference on Electrical Engineering. pages 1156–1165.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In EMNLP.
Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 Data Set.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open-Domain Questions. arXiv preprint arXiv:1704.00051. | 1710.10723#38 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
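The conclusion above credits much of the gain to a shared-normalization objective over sampled paragraphs. A minimal sketch of that idea, assuming one start-logit vector per sampled paragraph and written for this card rather than copied from the authors' implementation, is:

```python
import torch
import torch.nn.functional as F

def shared_norm_start_loss(per_paragraph_logits, answer_positions):
    # per_paragraph_logits: list of (num_tokens_i,) tensors, one per paragraph sampled for a question
    # answer_positions: list of (paragraph_index, token_index) pairs of correct answer starts
    all_logits = torch.cat(per_paragraph_logits)      # one score vector over every token of every paragraph
    log_probs = F.log_softmax(all_logits, dim=0)      # single softmax shared across paragraphs

    # Offset of each paragraph's tokens inside the concatenated vector.
    offsets = [0]
    for logits in per_paragraph_logits[:-1]:
        offsets.append(offsets[-1] + logits.shape[0])

    # Summed objective: maximize the total probability of all correct answer starts.
    picked = torch.stack([log_probs[offsets[p] + t] for p, t in answer_positions])
    return -torch.logsumexp(picked, dim=0)
```

Because the softmax is shared, the per-paragraph confidence scores remain comparable when the model is later run over every paragraph of a document.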
1710.10723 | 39 | Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long Short-Term Memory-Networks for Machine Reading. arXiv preprint arXiv:1601.06733 .
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for Question Answering by Search and Reading. arXiv preprint arXiv:1707.03904.
Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In Advances in Neural Information Processing Systems.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems. | 1710.10723#39 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 40 | Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A Novel Large-scale Language Understanding Task over Wikipedia. arXiv preprint arXiv:1608.03542 .
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301.
Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Mnemonic Reader: Machine Comprehension with Iterative Aligning and Multi-hop Answer Pointing .
Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. arXiv preprint arXiv:1707.07328.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv preprint arXiv:1705.03551. | 1710.10723#40 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 41 | Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 .
Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question Answering through Transfer Learning from Large Fine-grained Supervision Data. arXiv preprint arXiv:1702.02171.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268 .
Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension. arXiv preprint arXiv:1707.09098.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). | 1710.10723#41 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 42 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250 .
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. CoRR abs/1611.01603.
Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2017. S-net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815.
Ellen M Voorhees et al. 1999. The TREC-8 Question Answering Track Report. In Trec.
Shuohang Wang and Jing Jiang. 2016. Machine Comprehension Using Match-LSTM and Answer Pointer. arXiv preprint arXiv:1608.07905 . | 1710.10723#42 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10723 | 43 | Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017a. R3: Reinforced Reader-Ranker for Open-Domain Question Answering. arXiv preprint arXiv:1709.00023.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 189–198.
Dirk Weissenborn, Tomáš Kočiský, and Chris Dyer. 2017a. Dynamic Integration of Background Knowledge in Neural NLU Systems. arXiv preprint arXiv:1706.02596.
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017b. FastQA: A Simple and Efficient Neural Architecture for Question Answering. arXiv preprint arXiv:1703.04816.
Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. | 1710.10723#43 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 | [
{
"id": "1608.07905"
},
{
"id": "1703.04816"
},
{
"id": "1511.02301"
},
{
"id": "1707.09098"
},
{
"id": "1702.02171"
},
{
"id": "1709.00023"
},
{
"id": "1606.05250"
},
{
"id": "1601.06733"
},
{
"id": "1611.09268"
},
{
"id": "1706.02596"
},
{
"id": "1608.03542"
},
{
"id": "1706.04815"
},
{
"id": "1704.00051"
},
{
"id": "1705.03551"
},
{
"id": "1707.03904"
},
{
"id": "1603.01547"
},
{
"id": "1707.07328"
}
] |
1710.10368 | 0 | arXiv:1710.10368v2 [cs.LG] 25 May 2018
# Deep Generative Dual Memory Network for Continual Learning
# Nitin Kamra 1 Umang Gupta 1 Yan Liu 1
# Abstract
Despite advances in deep learning, neural networks can only learn multiple tasks when trained on them jointly. When tasks arrive sequentially, they lose performance on previously learnt tasks. This phenomenon called catastrophic forgetting is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting. Specifically, our contributions are: (i) a dual memory architecture emulating the complementary learning systems (hippocampus and the neocortex) in the human brain, (ii) memory consolidation via generative replay of past experiences, (iii) demonstrating advantages of generative replay and dual memories via experiments, and (iv) improved performance retention on challenging tasks even for low capacity models. Our architecture displays many characteristics of the mammalian memory and provides insights on the connection between sleep and learning. | 1710.10368#0 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
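The abstract above describes memory consolidation via generative replay of past experiences. A schematic sketch of that training step, using an invented generator/solver interface that is only meant to illustrate the idea (not the paper's code), could look like:

```python
import torch
import torch.nn.functional as F

def consolidation_step(solver, old_solver, generator, new_x, new_y, optimizer, replay_ratio=1.0):
    # solver: classifier being trained on the new task (torch.nn.Module)
    # old_solver: frozen copy of the classifier from before the new task arrived
    # generator: hypothetical model with .sample(n) returning pseudo-inputs resembling past tasks
    n_replay = int(replay_ratio * new_x.shape[0])
    with torch.no_grad():
        replay_x = generator.sample(n_replay)              # generative replay of earlier experiences
        replay_y = old_solver(replay_x).argmax(dim=1)      # pseudo-labels from the previous solver

    # Mix replayed pseudo-data with the new task's data before updating the solver.
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, replay_y])

    optimizer.zero_grad()
    loss = F.cross_entropy(solver(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```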
1710.10368 | 1 | 1993; French, 1994). Recently, activations like maxout and dropout (Goodfellow et al., 2013) and local winner-takes-all (Srivastava et al., 2013) have been explored to create sparsified feature representations. But, natural cognitive systems e.g. mammalian brains are also connectionist in nature and yet they only undergo gradual systematic forgetting. Frequently and recently encountered tasks tend to survive much longer in memory, while those rarely encountered are slowly forgotten. Hence shared representations may not be the root cause of the problem. More recent approaches have targeted slowing down learning on network weights which are important for previously learnt tasks. Kirkpatrick et al. (2017) have used a Fisher information matrix based regularizer to slow down learning on network weights which correlate with previously acquired knowledge. Zenke et al. (2017) have employed path integrals of loss-derivatives to slow down learning on weights important for the previous tasks. Progressive neural networks (Rusu et al., 2016) and Pathnets (Fernando et al., 2017) directly freeze important pathways in neural networks, which eliminates forgetting altogether but requires growing the network after each | 1710.10368#1 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
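The chunk above summarizes Kirkpatrick et al. (2017), who slow down learning on weights that carry information about earlier tasks via a Fisher-information-based regularizer. A minimal sketch of such a quadratic penalty, with assumed variable names rather than any released implementation, is:

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1.0):
    # old_params: dict mapping parameter name -> values saved after the previous task
    # fisher: dict mapping parameter name -> diagonal Fisher information estimate
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Usage sketch: total_loss = task_loss + ewc_penalty(model, old_params, fisher, lam=100.0)
```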